merged
author hoelzl
Wed, 21 Apr 2010 11:23:04 +0200
changeset 36245 af5fe3a72087
parent 36244 009b0ee1b838 (current diff)
parent 36242 d65f943b9f07 (diff)
child 36262 d7d1d87276b7
merged
Admin/isatest/settings/cygwin-poly
src/HOL/Library/RBT.thy
src/HOL/Library/Table.thy
src/HOL/Library/Tree.thy
--- a/Admin/PLATFORMS	Wed Apr 21 10:44:44 2010 +0200
+++ b/Admin/PLATFORMS	Wed Apr 21 11:23:04 2010 +0200
@@ -1,5 +1,5 @@
-Some notes on platform support of Isabelle
-==========================================
+Some notes on multi-platform support of Isabelle
+================================================
 
 Preamble
 --------
@@ -11,32 +11,34 @@
 The basic Isabelle system infrastructure provides some facilities to
 make this work, e.g. see the ML structures File and Path, or functions
 like bash_output.  The settings environment also provides some means
-for portability, e.g. jvm_path to hold up the impression that even
-Java on Windows/Cygwin adheres to Isabelle/POSIX standards.
+for portability, e.g. jvm_path to hold up the impression that Java on
+Windows/Cygwin adheres to Isabelle/POSIX standards.
 
 When producing add-on tools, it is important to stay within this clean
 room of Isabelle, and refrain from overly ambitious system hacking.
+The existing Isabelle scripts follow a certain style that might look
+odd at first sight, but reflects long years of experience in getting
+system plumbing right (which is quite hard).
 
 
 Supported platforms
 -------------------
 
 The following hardware and operating system platforms are officially
-supported by the Isabelle distribution (and bundled tools):
-
-  x86-linux
-  x86-darwin
-  x86-cygwin
-  x86_64-linux
-  x86_64-darwin
+supported by the Isabelle distribution (and bundled tools), with the
+following reference versions (which have been selected to be neither
+too old nor too new):
 
-As of Cygwin 1.7 there is only a 32 bit version of that platform.  The
-other 64 bit platforms become more and more important for power users
-and always need to be taken into account when testing tools.
+  x86-linux         Ubuntu 8.04 LTS Server
+  x86-darwin        Mac OS Leopard
+  x86-cygwin        Cygwin 1.7
 
-All of the above platforms are 100% supported -- end-users should not
-have to care about the differences at all.  There are also some
-secondary platforms where Poly/ML also happens to work:
+  x86_64-linux      Ubuntu 8.04 LTS Server (64)
+  x86_64-darwin     Mac OS Leopard
+
+All of the above platforms are 100% supported by Isabelle -- end-users
+should not have to care about the differences at all.  There are also
+some secondary platforms where Poly/ML also happens to work:
 
   ppc-darwin
   sparc-solaris
@@ -45,7 +47,27 @@
 
 There is no guarantee that Isabelle add-ons work on these fringe
 platforms.  Even Isabelle/Scala already fails on ppc-darwin due to
-lack of JVM 1.6 support on that platform.
+lack of JVM 1.6 support by Apple.
+
+
+32 bit vs. 64 bit platforms
+---------------------------
+
+64 bit hardware is becoming more and more important for power users.
+Add-on tools need to work seamlessly without manual user
+configuration, although it is often sufficient to fall back on 32 bit
+executables.
+
+The ISABELLE_PLATFORM setting variable refers to the 32 bit version of
+the platform, even on 64 bit hardware.  Power-tools need to indicate
+64 bit support explicitly, via the (optional) ISABELLE_PLATFORM64
+setting.  The following bash expression prefers the 64 bit platform,
+if that is available:
+
+  "${ISABELLE_PLATFORM64:-$ISABELLE_PLATFORM}"
+
+Note that the ML system and the JVM may have a different idea of the
+platform, depending on the respective binaries that are actually run.
 
 
 Dependable system tools
@@ -53,17 +75,17 @@
 
 The following portable system tools can be taken for granted:
 
-  * GNU bash as uniform shell on all platforms.  Note that the POSIX
-    "standard" shell /bin/sh is *not* appropriate, because there are
-    too many different implementations of it.
+* GNU bash as uniform shell on all platforms.  The POSIX "standard"
+  shell /bin/sh is *not* appropriate, because there are too many
+  different implementations of it.
 
-  * Perl as largely portable system programming language.  In some
-    situations Python may as an alternative, but it usually performs
-    not as well in addressing various delicate details of basic
-    operating system concepts (processes, signals, sockets etc.).
+* Perl as largely portable system programming language.  In some
+  situations Python may serve as an alternative, but it usually
+  does not perform as well in addressing various delicate details
+  of operating system concepts (processes, signals, sockets etc.).
 
-  * Scala with Java Runtime 1.6.  The Isabelle/Pure.jar library irons
-    out many oddities and portability problems of the Java platform.
+* Scala with Java Runtime 1.6.  The Isabelle/Pure.jar library irons
+  out many oddities and portability problems of the Java platform.
 
 
 Known problems
@@ -71,11 +93,18 @@
 
 * Mac OS: If MacPorts is installed and its version of Perl takes
   precedence over /usr/bin/perl in the PATH, then the end-user needs
-  to take care of installing add-on modules, e.g. HTTP support.  Such
-  add-ons are usually included in Apple's /usr/bin/perl by default.
+  to take care of installing extra modules, e.g. for HTTP support.
+  Such add-ons are usually included in Apple's /usr/bin/perl by
+  default.
 
 * The Java runtime has its own idea about the underlying platform,
-  e.g. on 64 bit machine Isabelle/ML could be x86-linux, but the JVM
+  e.g. on a 64 bit machine Isabelle/ML could be x86-linux, but the JVM
   could be x86_64-linux.  This affects Java native libraries in
-  particular -- which are very hard to support in a platform
-  independent manner, and should be avoided in the first place.
+  particular -- which cause extra portability problems and can make
+  the JVM crash anyway.
+
+  In Isabelle/Scala isabelle.Platform.jvm_platform identifies the JVM
+  platform.  Since there can be many different Java installations on
+  the same machine, which can also be run with different options,
+  reliable JVM platform identification works from inside the running
+  JVM only.
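The 64 bit fallback expression above can be used directly in component
settings files.  A minimal sketch of such a fragment (the Poly/ML
directory layout shown is only an assumption for illustration; cf. the
isatest settings further below):

  # -*- shell-script -*- :mode=shellscript:

  # prefer the 64 bit platform if available, otherwise fall back on
  # the (always present) 32 bit ISABELLE_PLATFORM
  ML_PLATFORM="${ISABELLE_PLATFORM64:-$ISABELLE_PLATFORM}"

  # assumed layout: one Poly/ML installation per platform identifier
  POLYML_HOME="$HOME/contrib/polyml-5.3.0"
  ML_HOME="$POLYML_HOME/$ML_PLATFORM"
  ML_OPTIONS="-H 500"
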
--- a/Admin/isatest/isatest-makeall	Wed Apr 21 10:44:44 2010 +0200
+++ b/Admin/isatest/isatest-makeall	Wed Apr 21 11:23:04 2010 +0200
@@ -10,7 +10,7 @@
 # max time until test is aborted (in sec)
 MAXTIME=28800
 
-PUBLISH_TEST=/home/isabelle-repository/repos/testtool/publish_test.py
+PUBLISH_TEST="$HOME/home/isabelle-repository/repos/testtool/publish_test.py"
 
 ## diagnostics
 
@@ -186,7 +186,7 @@
         echo >> $ERRORLOG
 
         FAIL="$FAIL$SHORT "
-        (cd $ERRORDIR; ln -s $TESTLOG)
+        (cd $ERRORDIR; cp -a $TESTLOG .)
     fi
 
     rm -f $RUNNING/$SHORT.running
--- a/Admin/isatest/isatest-makedist	Wed Apr 21 10:44:44 2010 +0200
+++ b/Admin/isatest/isatest-makedist	Wed Apr 21 11:23:04 2010 +0200
@@ -55,6 +55,7 @@
 
 echo "### cleaning up old isabelle-* directories" >> $DISTLOG 2>&1
 rm -rf $HOME/isabelle-*
+ssh atbroy102 "rm -rf /home/isatest/isabelle-cygwin-poly-e"
 
 echo "### building distribution"  >> $DISTLOG 2>&1
 mkdir -p $DISTPREFIX
@@ -81,6 +82,9 @@
 cd $DISTPREFIX >> $DISTLOG 2>&1
 $TAR xvzf `cat $DISTPREFIX/ISABELLE_DIST` >> $DISTLOG 2>&1
 
+ssh atbroy102 "rm -rf /home/isatest/isadist && mkdir -p /home/isatest/isadist" && \
+rsync -a "$HOME/isadist/." atbroy102:/home/isatest/isadist/.
+
 echo ------------------- prepared test successfully --- `date` --- $HOSTNAME >> $DISTLOG 2>&1
 
 ELAPSED=$("$HOME/bin/showtime" "$SECONDS")
@@ -106,7 +110,7 @@
 sleep 15
 $SSH macbroy6 "sleep 10800; $MAKEALL $HOME/settings/at-mac-poly-5.1-para"
 sleep 15
-$SSH atbroy102 "$MAKEALL $HOME/settings/cygwin-poly"
+$SSH atbroy102 "$MAKEALL $HOME/settings/cygwin-poly-e"
 #sleep 15
 #$SSH atbroy51 "$HOME/admin/isatest/isatest-annomaly"
 #sleep 15
--- a/Admin/isatest/settings/cygwin-poly	Wed Apr 21 10:44:44 2010 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,27 +0,0 @@
-# -*- shell-script -*- :mode=shellscript:
-
-  POLYML_HOME="/home/isatest/homebroy/home/polyml/polyml-5.3.0"
-  ML_SYSTEM="polyml-5.3.0"
-  ML_PLATFORM="x86-cygwin"
-  ML_HOME="$POLYML_HOME/$ML_PLATFORM"
-  ML_OPTIONS="-H 500"
-
-ISABELLE_HOME_USER=~/isabelle-cygwin-poly
-
-# Where to look for isabelle tools (multiple dirs separated by ':').
-ISABELLE_TOOLS="$ISABELLE_HOME/lib/Tools"
-
-# Location for temporary files (should be on a local file system).
-ISABELLE_TMP_PREFIX="/tmp/isabelle-$USER"
-
-
-# Heap input locations. ML system identifier is included in lookup.
-ISABELLE_PATH="$ISABELLE_HOME_USER/heaps:$ISABELLE_HOME/heaps"
-
-# Heap output location. ML system identifier is appended automatically later on.
-ISABELLE_OUTPUT="$ISABELLE_HOME_USER/heaps"
-ISABELLE_BROWSER_INFO="$ISABELLE_HOME_USER/browser_info"
-
-ISABELLE_USEDIR_OPTIONS="-M 1 -i true -d pdf -v true -t true"
-
-init_component "$HOME/contrib_devel/kodkodi"
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/Admin/isatest/settings/cygwin-poly-e	Wed Apr 21 11:23:04 2010 +0200
@@ -0,0 +1,27 @@
+# -*- shell-script -*- :mode=shellscript:
+
+  POLYML_HOME="/home/isatest/homebroy/home/polyml/polyml-5.3.0"
+  ML_SYSTEM="polyml-5.3.0"
+  ML_PLATFORM="x86-cygwin"
+  ML_HOME="$POLYML_HOME/$ML_PLATFORM"
+  ML_OPTIONS="-H 200"
+
+ISABELLE_HOME_USER=~/isabelle-cygwin-poly-e
+
+# Where to look for isabelle tools (multiple dirs separated by ':').
+ISABELLE_TOOLS="$ISABELLE_HOME/lib/Tools"
+
+# Location for temporary files (should be on a local file system).
+ISABELLE_TMP_PREFIX="/tmp/isabelle-$USER"
+
+
+# Heap input locations. ML system identifier is included in lookup.
+ISABELLE_PATH="$ISABELLE_HOME_USER/heaps:$ISABELLE_HOME/heaps"
+
+# Heap output location. ML system identifier is appended automatically later on.
+ISABELLE_OUTPUT="$ISABELLE_HOME_USER/heaps"
+ISABELLE_BROWSER_INFO="$ISABELLE_HOME_USER/browser_info"
+
+ISABELLE_USEDIR_OPTIONS="-M 1 -i false -d false"
+
+init_component "$HOME/contrib_devel/kodkodi"
--- a/NEWS	Wed Apr 21 10:44:44 2010 +0200
+++ b/NEWS	Wed Apr 21 11:23:04 2010 +0200
@@ -71,9 +71,15 @@
 in subsequent goal refinement steps. Tracing may also still be
 enabled or disabled via the ProofGeneral settings menu.
 
+* Separate commands 'hide_class', 'hide_type', 'hide_const',
+'hide_fact' replace the former 'hide' KIND command.  Minor
+INCOMPATIBILITY.
+
 
 *** Pure ***
 
+* Code generator: simple concept for abstract datatypes obeying invariants.
+
 * Local theory specifications may depend on extra type variables that
 are not present in the result type -- arguments TYPE('a) :: 'a itself
 are added internally.  For example:
@@ -95,9 +101,9 @@
 within a local theory context, with explicit checking of the
 constructors involved (in contrast to the raw 'syntax' versions).
 
-* Command 'typedecl' now works within a local theory context --
-without introducing dependencies on parameters or assumptions, which
-is not possible in Isabelle/Pure.
+* Commands 'types' and 'typedecl' now work within a local theory
+context -- without introducing dependencies on parameters or
+assumptions, which is not possible in Isabelle/Pure.
 
 * Proof terms: Type substitutions on proof constants now use canonical
 order of type variables. INCOMPATIBILITY: Tools working with proof
@@ -106,6 +112,10 @@
 
 *** HOL ***
 
+* Library theory 'RBT' renamed to 'RBT_Impl'; new library theory 'RBT'
+provides an abstract red-black tree type which is backed by RBT_Impl
+as implementation.  INCOMPATIBILITY.
+
 * Command 'typedef' now works within a local theory context -- without
 introducing dependencies on parameters or assumptions, which is not
 possible in Isabelle/Pure/HOL.  Note that the logical environment may
@@ -292,6 +302,10 @@
 * Antiquotation @{syntax_const NAME} ensures that NAME refers to a raw
 syntax constant (cf. 'syntax' command).
 
+* Antiquotation @{make_string} inlines a function to print arbitrary
+values similar to the ML toplevel.  The result is compiler dependent
+and may fall back on "?" in certain situations.
+
 * Renamed old-style Drule.standard to Drule.export_without_context, to
 emphasize that this is in no way a standard operation.
 INCOMPATIBILITY.
@@ -325,6 +339,13 @@
 feature since Isabelle2009).  Use ISABELLE_PROCESS and ISABELLE_TOOL,
 respectively.
 
+* Old lib/scripts/polyml-platform is superseded by the
+ISABELLE_PLATFORM setting variable, which defaults to the 32 bit
+variant, even on a 64 bit machine.  The following example setting
+prefers 64 bit if available:
+
+  ML_PLATFORM="${ISABELLE_PLATFORM64:-$ISABELLE_PLATFORM}"
+
 
 
 New in Isabelle2009-1 (December 2009)
--- a/doc-src/Codegen/Thy/Program.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/doc-src/Codegen/Thy/Program.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -430,7 +430,7 @@
 
 subsection {* Inductive Predicates *}
 (*<*)
-hide const append
+hide_const append
 
 inductive append
 where
--- a/doc-src/IsarImplementation/Thy/Logic.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/doc-src/IsarImplementation/Thy/Logic.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -334,7 +334,7 @@
   this is a datatype with constructors @{ML Bound}, @{ML Free}, @{ML
   Var}, @{ML Const}, @{ML Abs}, @{ML "op $"}.
 
-  \item @{text "t"}~@{ML aconv}~@{text "u"} checks @{text
+  \item @{text "t"}~@{ML_text aconv}~@{text "u"} checks @{text
   "\<alpha>"}-equivalence of two terms.  This is the basic equality relation
   on type @{ML_type term}; raw datatype equality should only be used
   for operations related to parsing or printing!
@@ -552,10 +552,10 @@
   @{index_ML Thm.implies_elim: "thm -> thm -> thm"} \\
   @{index_ML Thm.generalize: "string list * string list -> int -> thm -> thm"} \\
   @{index_ML Thm.instantiate: "(ctyp * ctyp) list * (cterm * cterm) list -> thm -> thm"} \\
-  @{index_ML Thm.add_axiom: "binding * term -> theory -> thm * theory"} \\
+  @{index_ML Thm.add_axiom: "binding * term -> theory -> (string * thm) * theory"} \\
   @{index_ML Thm.add_oracle: "binding * ('a -> cterm) -> theory
   -> (string * ('a -> thm)) * theory"} \\
-  @{index_ML Thm.add_def: "bool -> bool -> binding * term -> theory -> thm * theory"} \\
+  @{index_ML Thm.add_def: "bool -> bool -> binding * term -> theory -> (string * thm) * theory"} \\
   \end{mldecls}
   \begin{mldecls}
   @{index_ML Theory.add_deps: "string -> string * typ -> (string * typ) list -> theory -> theory"} \\
--- a/doc-src/IsarImplementation/Thy/ML.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/doc-src/IsarImplementation/Thy/ML.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -209,7 +209,7 @@
   options} for type @{ML_type "bool"}/@{ML_type "int"}/@{ML_type
   "string"} (see structure @{ML_struct Config} and @{ML
   Attrib.config_bool} etc.), and lists of theorems (see functor
-  @{ML_functor NamedThmsFun}).
+  @{ML_functor Named_Thms}).
 
   \item Keep components with local state information
   \emph{re-entrant}.  Instead of poking initial values into (private)
@@ -319,7 +319,7 @@
   \smallskip\begin{mldecls}
   @{ML "Sign.declare_const: (binding * typ) * mixfix
   -> theory -> term * theory"} \\
-  @{ML "Thm.add_def: bool -> bool -> binding * term -> theory -> thm * theory"}
+  @{ML "Thm.add_def: bool -> bool -> binding * term -> theory -> (string * thm) * theory"}
   \end{mldecls}
 
   \noindent Written with naive application, an addition of constant
@@ -623,7 +623,7 @@
   whenever such pure finite mappings are neccessary.
 
   The key type of tables must be given explicitly by instantiating
-  the @{ML_functor TableFun} functor which takes the key type
+  the @{ML_functor Table} functor which takes the key type
   together with its @{ML_type order}; for convience, we restrict
   here to the @{ML_struct Symtab} instance with @{ML_type string}
   as key type.
--- a/doc-src/IsarImplementation/Thy/document/Logic.tex	Wed Apr 21 10:44:44 2010 +0200
+++ b/doc-src/IsarImplementation/Thy/document/Logic.tex	Wed Apr 21 11:23:04 2010 +0200
@@ -560,10 +560,10 @@
   \indexdef{}{ML}{Thm.implies\_elim}\verb|Thm.implies_elim: thm -> thm -> thm| \\
   \indexdef{}{ML}{Thm.generalize}\verb|Thm.generalize: string list * string list -> int -> thm -> thm| \\
   \indexdef{}{ML}{Thm.instantiate}\verb|Thm.instantiate: (ctyp * ctyp) list * (cterm * cterm) list -> thm -> thm| \\
-  \indexdef{}{ML}{Thm.add\_axiom}\verb|Thm.add_axiom: binding * term -> theory -> thm * theory| \\
+  \indexdef{}{ML}{Thm.add\_axiom}\verb|Thm.add_axiom: binding * term -> theory -> (string * thm) * theory| \\
   \indexdef{}{ML}{Thm.add\_oracle}\verb|Thm.add_oracle: binding * ('a -> cterm) -> theory|\isasep\isanewline%
 \verb|  -> (string * ('a -> thm)) * theory| \\
-  \indexdef{}{ML}{Thm.add\_def}\verb|Thm.add_def: bool -> bool -> binding * term -> theory -> thm * theory| \\
+  \indexdef{}{ML}{Thm.add\_def}\verb|Thm.add_def: bool -> bool -> binding * term -> theory -> (string * thm) * theory| \\
   \end{mldecls}
   \begin{mldecls}
   \indexdef{}{ML}{Theory.add\_deps}\verb|Theory.add_deps: string -> string * typ -> (string * typ) list -> theory -> theory| \\
--- a/doc-src/IsarImplementation/Thy/document/ML.tex	Wed Apr 21 10:44:44 2010 +0200
+++ b/doc-src/IsarImplementation/Thy/document/ML.tex	Wed Apr 21 11:23:04 2010 +0200
@@ -222,7 +222,7 @@
   \secref{sec:context-data}) there are drop-in replacements that
   emulate primitive references for common cases of \emph{configuration
   options} for type \verb|bool|/\verb|int|/\verb|string| (see structure \verb|Config| and \verb|Attrib.config_bool| etc.), and lists of theorems (see functor
-  \verb|NamedThmsFun|).
+  \verb|Named_Thms|).
 
   \item Keep components with local state information
   \emph{re-entrant}.  Instead of poking initial values into (private)
@@ -368,7 +368,7 @@
   \smallskip\begin{mldecls}
   \verb|Sign.declare_const: (binding * typ) * mixfix|\isasep\isanewline%
 \verb|  -> theory -> term * theory| \\
-  \verb|Thm.add_def: bool -> bool -> binding * term -> theory -> thm * theory|
+  \verb|Thm.add_def: bool -> bool -> binding * term -> theory -> (string * thm) * theory|
   \end{mldecls}
 
   \noindent Written with naive application, an addition of constant
@@ -763,7 +763,7 @@
   whenever such pure finite mappings are neccessary.
 
   The key type of tables must be given explicitly by instantiating
-  the \verb|TableFun| functor which takes the key type
+  the \verb|Table| functor which takes the key type
   together with its \verb|order|; for convience, we restrict
   here to the \verb|Symtab| instance with \verb|string|
   as key type.
--- a/doc-src/IsarRef/Thy/HOL_Specific.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/doc-src/IsarRef/Thy/HOL_Specific.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -178,7 +178,7 @@
   \end{matharray}
 
   \begin{rail}
-    'record' typespec '=' (type '+')? (constdecl +)
+    'record' typespecsorts '=' (type '+')? (constdecl +)
     ;
   \end{rail}
 
@@ -471,7 +471,7 @@
   Note that the resulting simplification and induction rules
   correspond to the transformed specification, not the one given
   originally. This usually means that each equation given by the user
-  may result in several theroems.  Also note that this automatic
+  may result in several theorems.  Also note that this automatic
   transformation only works for ML-style datatype patterns.
 
   \item @{text domintros} enables the automated generation of
--- a/doc-src/IsarRef/Thy/Spec.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/doc-src/IsarRef/Thy/Spec.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -953,7 +953,7 @@
 
 text {*
   \begin{matharray}{rcll}
-    @{command_def "types"} & : & @{text "theory \<rightarrow> theory"} \\
+    @{command_def "types"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
     @{command_def "typedecl"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
     @{command_def "arities"} & : & @{text "theory \<rightarrow> theory"} & (axiomatic!) \\
   \end{matharray}
@@ -1225,11 +1225,14 @@
   \begin{matharray}{rcl}
     @{command_def "global"} & : & @{text "theory \<rightarrow> theory"} \\
     @{command_def "local"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def "hide"} & : & @{text "theory \<rightarrow> theory"} \\
+    @{command_def "hide_class"} & : & @{text "theory \<rightarrow> theory"} \\
+    @{command_def "hide_type"} & : & @{text "theory \<rightarrow> theory"} \\
+    @{command_def "hide_const"} & : & @{text "theory \<rightarrow> theory"} \\
+    @{command_def "hide_fact"} & : & @{text "theory \<rightarrow> theory"} \\
   \end{matharray}
 
   \begin{rail}
-    'hide' ('(open)')? name (nameref + )
+    ( 'hide\_class' | 'hide\_type' | 'hide\_const' | 'hide\_fact' ) ('(open)')? (nameref + )
     ;
   \end{rail}
 
@@ -1251,17 +1254,20 @@
   Note that global names are prone to get hidden accidently later,
   when qualified names of the same base name are introduced.
   
-  \item @{command "hide"}~@{text "space names"} fully removes
-  declarations from a given name space (which may be @{text "class"},
-  @{text "type"}, @{text "const"}, or @{text "fact"}); with the @{text
-  "(open)"} option, only the base name is hidden.  Global
-  (unqualified) names may never be hidden.
-  
+  \item @{command "hide_class"}~@{text names} fully removes class
+  declarations from a given name space; with the @{text "(open)"}
+  option, only the base name is hidden.  Global (unqualified) names
+  may never be hidden.
+
   Note that hiding name space accesses has no impact on logical
   declarations --- they remain valid internally.  Entities that are no
   longer accessible to the user are printed with the special qualifier
   ``@{text "??"}'' prefixed to the full internal name.
 
+  \item @{command "hide_type"}, @{command "hide_const"}, and @{command
+  "hide_fact"} are similar to @{command "hide_class"}, but hide types,
+  constants, and facts, respectively.
+  
   \end{description}
 *}
 
--- a/doc-src/IsarRef/Thy/document/HOL_Specific.tex	Wed Apr 21 10:44:44 2010 +0200
+++ b/doc-src/IsarRef/Thy/document/HOL_Specific.tex	Wed Apr 21 11:23:04 2010 +0200
@@ -202,7 +202,7 @@
   \end{matharray}
 
   \begin{rail}
-    'record' typespec '=' (type '+')? (constdecl +)
+    'record' typespecsorts '=' (type '+')? (constdecl +)
     ;
   \end{rail}
 
@@ -479,7 +479,7 @@
   Note that the resulting simplification and induction rules
   correspond to the transformed specification, not the one given
   originally. This usually means that each equation given by the user
-  may result in several theroems.  Also note that this automatic
+  may result in several theorems.  Also note that this automatic
   transformation only works for ML-style datatype patterns.
 
   \item \isa{domintros} enables the automated generation of
--- a/doc-src/IsarRef/Thy/document/Spec.tex	Wed Apr 21 10:44:44 2010 +0200
+++ b/doc-src/IsarRef/Thy/document/Spec.tex	Wed Apr 21 11:23:04 2010 +0200
@@ -990,7 +990,7 @@
 %
 \begin{isamarkuptext}%
 \begin{matharray}{rcll}
-    \indexdef{}{command}{types}\hypertarget{command.types}{\hyperlink{command.types}{\mbox{\isa{\isacommand{types}}}}} & : & \isa{{\isachardoublequote}theory\ {\isasymrightarrow}\ theory{\isachardoublequote}} \\
+    \indexdef{}{command}{types}\hypertarget{command.types}{\hyperlink{command.types}{\mbox{\isa{\isacommand{types}}}}} & : & \isa{{\isachardoublequote}local{\isacharunderscore}theory\ {\isasymrightarrow}\ local{\isacharunderscore}theory{\isachardoublequote}} \\
     \indexdef{}{command}{typedecl}\hypertarget{command.typedecl}{\hyperlink{command.typedecl}{\mbox{\isa{\isacommand{typedecl}}}}} & : & \isa{{\isachardoublequote}local{\isacharunderscore}theory\ {\isasymrightarrow}\ local{\isacharunderscore}theory{\isachardoublequote}} \\
     \indexdef{}{command}{arities}\hypertarget{command.arities}{\hyperlink{command.arities}{\mbox{\isa{\isacommand{arities}}}}} & : & \isa{{\isachardoublequote}theory\ {\isasymrightarrow}\ theory{\isachardoublequote}} & (axiomatic!) \\
   \end{matharray}
@@ -1268,11 +1268,14 @@
 \begin{matharray}{rcl}
     \indexdef{}{command}{global}\hypertarget{command.global}{\hyperlink{command.global}{\mbox{\isa{\isacommand{global}}}}} & : & \isa{{\isachardoublequote}theory\ {\isasymrightarrow}\ theory{\isachardoublequote}} \\
     \indexdef{}{command}{local}\hypertarget{command.local}{\hyperlink{command.local}{\mbox{\isa{\isacommand{local}}}}} & : & \isa{{\isachardoublequote}theory\ {\isasymrightarrow}\ theory{\isachardoublequote}} \\
-    \indexdef{}{command}{hide}\hypertarget{command.hide}{\hyperlink{command.hide}{\mbox{\isa{\isacommand{hide}}}}} & : & \isa{{\isachardoublequote}theory\ {\isasymrightarrow}\ theory{\isachardoublequote}} \\
+    \indexdef{}{command}{hide\_class}\hypertarget{command.hide-class}{\hyperlink{command.hide-class}{\mbox{\isa{\isacommand{hide{\isacharunderscore}class}}}}} & : & \isa{{\isachardoublequote}theory\ {\isasymrightarrow}\ theory{\isachardoublequote}} \\
+    \indexdef{}{command}{hide\_type}\hypertarget{command.hide-type}{\hyperlink{command.hide-type}{\mbox{\isa{\isacommand{hide{\isacharunderscore}type}}}}} & : & \isa{{\isachardoublequote}theory\ {\isasymrightarrow}\ theory{\isachardoublequote}} \\
+    \indexdef{}{command}{hide\_const}\hypertarget{command.hide-const}{\hyperlink{command.hide-const}{\mbox{\isa{\isacommand{hide{\isacharunderscore}const}}}}} & : & \isa{{\isachardoublequote}theory\ {\isasymrightarrow}\ theory{\isachardoublequote}} \\
+    \indexdef{}{command}{hide\_fact}\hypertarget{command.hide-fact}{\hyperlink{command.hide-fact}{\mbox{\isa{\isacommand{hide{\isacharunderscore}fact}}}}} & : & \isa{{\isachardoublequote}theory\ {\isasymrightarrow}\ theory{\isachardoublequote}} \\
   \end{matharray}
 
   \begin{rail}
-    'hide' ('(open)')? name (nameref + )
+    ( 'hide\_class' | 'hide\_type' | 'hide\_const' | 'hide\_fact' ) ('(open)')? (nameref + )
     ;
   \end{rail}
 
@@ -1292,16 +1295,19 @@
   Note that global names are prone to get hidden accidently later,
   when qualified names of the same base name are introduced.
   
-  \item \hyperlink{command.hide}{\mbox{\isa{\isacommand{hide}}}}~\isa{{\isachardoublequote}space\ names{\isachardoublequote}} fully removes
-  declarations from a given name space (which may be \isa{{\isachardoublequote}class{\isachardoublequote}},
-  \isa{{\isachardoublequote}type{\isachardoublequote}}, \isa{{\isachardoublequote}const{\isachardoublequote}}, or \isa{{\isachardoublequote}fact{\isachardoublequote}}); with the \isa{{\isachardoublequote}{\isacharparenleft}open{\isacharparenright}{\isachardoublequote}} option, only the base name is hidden.  Global
-  (unqualified) names may never be hidden.
-  
+  \item \hyperlink{command.hide-class}{\mbox{\isa{\isacommand{hide{\isacharunderscore}class}}}}~\isa{names} fully removes class
+  declarations from a given name space; with the \isa{{\isachardoublequote}{\isacharparenleft}open{\isacharparenright}{\isachardoublequote}}
+  option, only the base name is hidden.  Global (unqualified) names
+  may never be hidden.
+
   Note that hiding name space accesses has no impact on logical
   declarations --- they remain valid internally.  Entities that are no
   longer accessible to the user are printed with the special qualifier
   ``\isa{{\isachardoublequote}{\isacharquery}{\isacharquery}{\isachardoublequote}}'' prefixed to the full internal name.
 
+  \item \hyperlink{command.hide-type}{\mbox{\isa{\isacommand{hide{\isacharunderscore}type}}}}, \hyperlink{command.hide-const}{\mbox{\isa{\isacommand{hide{\isacharunderscore}const}}}}, and \hyperlink{command.hide-fact}{\mbox{\isa{\isacommand{hide{\isacharunderscore}fact}}}} are similar to \hyperlink{command.hide-class}{\mbox{\isa{\isacommand{hide{\isacharunderscore}class}}}}, but hide types,
+  constants, and facts, respectively.
+  
   \end{description}%
 \end{isamarkuptext}%
 \isamarkuptrue%
--- a/doc-src/LaTeXsugar/Sugar/Sugar.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/doc-src/LaTeXsugar/Sugar/Sugar.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -174,6 +174,14 @@
 \begin{quote}
 \verb!@!\verb!{thm fst_conv[where b = DUMMY]}!
 \end{quote}
+Variables that are bound by quantifiers or lambdas cannot be renamed
+like this. Instead, the attribute \texttt{rename\_abs} does the
+job. It expects a list of names or underscores, similar to the
+\texttt{of} attribute:
+\begin{quote}
+\verb!@!\verb!{thm split_paired_All[rename_abs _ l r]}!
+\end{quote}
+produces @{thm split_paired_All[rename_abs _ l r]}.
 *}
 
 subsection "Inference rules"
--- a/doc-src/LaTeXsugar/Sugar/document/Sugar.tex	Wed Apr 21 10:44:44 2010 +0200
+++ b/doc-src/LaTeXsugar/Sugar/document/Sugar.tex	Wed Apr 21 11:23:04 2010 +0200
@@ -233,7 +233,15 @@
 \isa{fst\ {\isacharparenleft}a{\isacharcomma}\ \_{\isacharparenright}\ {\isacharequal}\ a} is produced by
 \begin{quote}
 \verb!@!\verb!{thm fst_conv[where b = DUMMY]}!
-\end{quote}%
+\end{quote}
+Variables that are bound by quantifiers or lambdas cannot be renamed
+like this. Instead, the attribute \texttt{rename\_abs} does the
+job. It expects a list of names or underscores, similar to the
+\texttt{of} attribute:
+\begin{quote}
+\verb!@!\verb!{thm split_paired_All[rename_abs _ l r]}!
+\end{quote}
+produces \isa{{\isacharparenleft}{\isasymforall}x{\isachardot}\ P\ x{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}{\isasymforall}l\ r{\isachardot}\ P\ {\isacharparenleft}l{\isacharcomma}\ r{\isacharparenright}{\isacharparenright}}.%
 \end{isamarkuptext}%
 \isamarkuptrue%
 %
--- a/doc-src/Locales/Locales/Examples.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/doc-src/Locales/Locales/Examples.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -2,7 +2,7 @@
 imports Main
 begin
 
-hide %invisible const Lattices.lattice
+hide_const %invisible Lattices.lattice
 pretty_setmargin %invisible 65
 
 (*
--- a/doc-src/Locales/Locales/document/Examples.tex	Wed Apr 21 10:44:44 2010 +0200
+++ b/doc-src/Locales/Locales/document/Examples.tex	Wed Apr 21 11:23:04 2010 +0200
@@ -25,8 +25,8 @@
 \endisadeliminvisible
 %
 \isataginvisible
-\isacommand{hide}\isamarkupfalse%
-\ const\ Lattices{\isachardot}lattice\isanewline
+\isacommand{hide{\isacharunderscore}const}\isamarkupfalse%
+\ Lattices{\isachardot}lattice\isanewline
 \isacommand{pretty{\isacharunderscore}setmargin}\isamarkupfalse%
 \ {\isadigit{6}}{\isadigit{5}}%
 \endisataginvisible
--- a/doc-src/Nitpick/nitpick.tex	Wed Apr 21 10:44:44 2010 +0200
+++ b/doc-src/Nitpick/nitpick.tex	Wed Apr 21 11:23:04 2010 +0200
@@ -1445,8 +1445,8 @@
     (\!\begin{aligned}[t]%
     & \xi_1 := \xi_2,\> \xi_2 := \xi_2, \\[-2pt]
     & \textit{Branch}~\xi_1~\xi_2 := \xi_2)\end{aligned}$ \\[2\smallskipamount]
-The existence of a nonstandard model suggests that the induction hypothesis is not general enough or perhaps
-even wrong. See the ``Inductive Properties'' section of the Nitpick manual for details (``\textit{isabelle doc nitpick}'').
+The existence of a nonstandard model suggests that the induction hypothesis is not general enough or may even
+be wrong. See the Nitpick manual's ``Inductive Properties'' section for details (``\textit{isabelle doc nitpick}'').
 \postw
 
 Reading the Nitpick manual is a most excellent idea.
--- a/doc-src/System/Thy/Basics.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/doc-src/System/Thy/Basics.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -162,6 +162,17 @@
   some extend. In particular, site-wide defaults may be overridden by
   a private @{verbatim "$ISABELLE_HOME_USER/etc/settings"}.
   
+  \item[@{setting_def ISABELLE_PLATFORM}@{text "\<^sup>*"}] is automatically
+  set to a symbolic identifier for the underlying hardware and
+  operating system.  The Isabelle platform identification always
+  refers to the 32 bit variant, even if this is a 64 bit machine.  Note
+  that the ML or Java runtime may have a different idea, depending on
+  which binaries are actually run.
+
+  \item[@{setting_def ISABELLE_PLATFORM64}@{text "\<^sup>*"}] is similar to
+  @{setting ISABELLE_PLATFORM} but refers to the proper 64 bit variant
+  on a platform that supports this; the value is empty for 32 bit.
+
   \item[@{setting_def ISABELLE_PROCESS}@{text "\<^sup>*"}, @{setting
   ISABELLE_TOOL}@{text "\<^sup>*"}] are automatically set to the full path
   names of the @{executable "isabelle-process"} and @{executable
--- a/doc-src/System/Thy/document/Basics.tex	Wed Apr 21 10:44:44 2010 +0200
+++ b/doc-src/System/Thy/document/Basics.tex	Wed Apr 21 11:23:04 2010 +0200
@@ -180,6 +180,17 @@
   some extend. In particular, site-wide defaults may be overridden by
   a private \verb|$ISABELLE_HOME_USER/etc/settings|.
   
+  \item[\indexdef{}{setting}{ISABELLE\_PLATFORM}\hypertarget{setting.ISABELLE-PLATFORM}{\hyperlink{setting.ISABELLE-PLATFORM}{\mbox{\isa{\isatt{ISABELLE{\isacharunderscore}PLATFORM}}}}}\isa{{\isachardoublequote}\isactrlsup {\isacharasterisk}{\isachardoublequote}}] is automatically
+  set to a symbolic identifier for the underlying hardware and
+  operating system.  The Isabelle platform identification always
+  refers to the 32 bit variant, even if this is a 64 bit machine.  Note
+  that the ML or Java runtime may have a different idea, depending on
+  which binaries are actually run.
+
+  \item[\indexdef{}{setting}{ISABELLE\_PLATFORM64}\hypertarget{setting.ISABELLE-PLATFORM64}{\hyperlink{setting.ISABELLE-PLATFORM64}{\mbox{\isa{\isatt{ISABELLE{\isacharunderscore}PLATFORM{\isadigit{6}}{\isadigit{4}}}}}}}\isa{{\isachardoublequote}\isactrlsup {\isacharasterisk}{\isachardoublequote}}] is similar to
+  \hyperlink{setting.ISABELLE-PLATFORM}{\mbox{\isa{\isatt{ISABELLE{\isacharunderscore}PLATFORM}}}} but refers to the proper 64 bit variant
+  on a platform that supports this; the value is empty for 32 bit.
+
   \item[\indexdef{}{setting}{ISABELLE\_PROCESS}\hypertarget{setting.ISABELLE-PROCESS}{\hyperlink{setting.ISABELLE-PROCESS}{\mbox{\isa{\isatt{ISABELLE{\isacharunderscore}PROCESS}}}}}\isa{{\isachardoublequote}\isactrlsup {\isacharasterisk}{\isachardoublequote}}, \hyperlink{setting.ISABELLE-TOOL}{\mbox{\isa{\isatt{ISABELLE{\isacharunderscore}TOOL}}}}\isa{{\isachardoublequote}\isactrlsup {\isacharasterisk}{\isachardoublequote}}] are automatically set to the full path
   names of the \hyperlink{executable.isabelle-process}{\mbox{\isa{\isatt{isabelle{\isacharminus}process}}}} and \hyperlink{executable.isabelle}{\mbox{\isa{\isatt{isabelle}}}} executables, respectively.  Thus other tools and scripts
   need not assume that the \hyperlink{file.$ISABELLE-HOME/bin}{\mbox{\isa{\isatt{{\isachardollar}ISABELLE{\isacharunderscore}HOME{\isacharslash}bin}}}} directory is
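The two platform settings documented above can be inspected via the
standard getenv tool; the output shown is merely what a 64 bit Linux
installation would be expected to print, following the case
distinctions in lib/scripts/isabelle-platform below:

  # query the Isabelle settings environment; on a 64 bit Linux machine
  # this prints something like
  #   ISABELLE_PLATFORM=x86-linux
  #   ISABELLE_PLATFORM64=x86_64-linux
  isabelle getenv ISABELLE_PLATFORM ISABELLE_PLATFORM64
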
--- a/doc-src/TutorialI/Datatype/Nested.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/doc-src/TutorialI/Datatype/Nested.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -11,7 +11,7 @@
 Consider the following model of terms
 where function symbols can be applied to a list of arguments:
 *}
-(*<*)hide const Var(*>*)
+(*<*)hide_const Var(*>*)
 datatype ('v,'f)"term" = Var 'v | App 'f "('v,'f)term list";
 
 text{*\noindent
--- a/doc-src/TutorialI/Documents/Documents.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/doc-src/TutorialI/Documents/Documents.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -138,7 +138,7 @@
 *}
 
 (*<*)
-hide const xor
+hide_const xor
 setup {* Sign.add_path "version1" *}
 (*>*)
 definition xor :: "bool \<Rightarrow> bool \<Rightarrow> bool"    (infixl "\<oplus>" 60)
@@ -161,7 +161,7 @@
 *}
 
 (*<*)
-hide const xor
+hide_const xor
 setup {* Sign.add_path "version2" *}
 (*>*)
 definition xor :: "bool \<Rightarrow> bool \<Rightarrow> bool"    (infixl "[+]\<ignore>" 60)
--- a/doc-src/TutorialI/Misc/Option2.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/doc-src/TutorialI/Misc/Option2.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -1,7 +1,7 @@
 (*<*)
 theory Option2 imports Main begin
-hide const None Some
-hide type option
+hide_const None Some
+hide_type option
 (*>*)
 
 text{*\indexbold{*option (type)}\indexbold{*None (constant)}%
--- a/doc-src/TutorialI/Types/Overloading.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/doc-src/TutorialI/Types/Overloading.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -1,6 +1,6 @@
 (*<*)theory Overloading imports Main Setup begin
 
-hide (open) "class" plus (*>*)
+hide_class (open) plus (*>*)
 
 text {* Type classes allow \emph{overloading}; thus a constant may
 have multiple definitions at non-overlapping types. *}
--- a/doc-src/antiquote_setup.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/doc-src/antiquote_setup.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -54,7 +54,7 @@
 
 fun ml_structure (txt, _) = "functor XXX() = struct structure XX = " ^ txt ^ " end;";
 
-fun ml_functor _ = "";  (*no check!*)
+fun ml_functor (txt, _) = "ML_Env.check_functor " ^ ML_Syntax.print_string txt;
 
 fun index_ml name kind ml = ThyOutput.antiquotation name
   (Scan.lift (Args.name -- Scan.optional (Args.colon |-- Args.name) ""))
--- a/etc/isar-keywords-ZF.el	Wed Apr 21 10:44:44 2010 +0200
+++ b/etc/isar-keywords-ZF.el	Wed Apr 21 11:23:04 2010 +0200
@@ -43,7 +43,6 @@
     "classes"
     "classrel"
     "codatatype"
-    "code_abstype"
     "code_datatype"
     "code_library"
     "code_module"
@@ -83,7 +82,10 @@
     "header"
     "help"
     "hence"
-    "hide"
+    "hide_class"
+    "hide_const"
+    "hide_fact"
+    "hide_type"
     "inductive"
     "inductive_cases"
     "init_toplevel"
@@ -373,7 +375,10 @@
     "extract_type"
     "finalconsts"
     "global"
-    "hide"
+    "hide_class"
+    "hide_const"
+    "hide_fact"
+    "hide_type"
     "inductive"
     "instantiation"
     "judgment"
@@ -416,8 +421,7 @@
   '("inductive_cases"))
 
 (defconst isar-keywords-theory-goal
-  '("code_abstype"
-    "corollary"
+  '("corollary"
     "instance"
     "interpretation"
     "lemma"
--- a/etc/isar-keywords.el	Wed Apr 21 10:44:44 2010 +0200
+++ b/etc/isar-keywords.el	Wed Apr 21 11:23:04 2010 +0200
@@ -54,7 +54,6 @@
     "classes"
     "classrel"
     "code_abort"
-    "code_abstype"
     "code_class"
     "code_const"
     "code_datatype"
@@ -116,7 +115,10 @@
     "header"
     "help"
     "hence"
-    "hide"
+    "hide_class"
+    "hide_const"
+    "hide_fact"
+    "hide_type"
     "inductive"
     "inductive_cases"
     "inductive_set"
@@ -224,6 +226,7 @@
     "show"
     "simproc_setup"
     "sledgehammer"
+    "sledgehammer_params"
     "smt_status"
     "sorry"
     "specification"
@@ -494,7 +497,10 @@
     "fixrec"
     "fun"
     "global"
-    "hide"
+    "hide_class"
+    "hide_const"
+    "hide_fact"
+    "hide_type"
     "inductive"
     "inductive_set"
     "instantiation"
@@ -530,6 +536,7 @@
     "repdef"
     "setup"
     "simproc_setup"
+    "sledgehammer_params"
     "statespace"
     "syntax"
     "text"
@@ -549,7 +556,6 @@
 (defconst isar-keywords-theory-goal
   '("ax_specification"
     "boogie_vc"
-    "code_abstype"
     "code_pred"
     "corollary"
     "cpodef"
--- a/etc/settings	Wed Apr 21 10:44:44 2010 +0200
+++ b/etc/settings	Wed Apr 21 11:23:04 2010 +0200
@@ -17,7 +17,7 @@
 
 # Poly/ML 5.x (automated settings)
 POLY_HOME="$(type -p poly)"; [ -n "$POLY_HOME" ] && POLY_HOME="$(dirname "$POLY_HOME")"
-ML_PLATFORM=$("$ISABELLE_HOME/lib/scripts/polyml-platform")
+ML_PLATFORM="$ISABELLE_PLATFORM"
 ML_HOME=$(choosefrom \
   "$ISABELLE_HOME/contrib/polyml/$ML_PLATFORM" \
   "$ISABELLE_HOME/../polyml/$ML_PLATFORM" \
--- a/lib/Tools/java	Wed Apr 21 10:44:44 2010 +0200
+++ b/lib/Tools/java	Wed Apr 21 11:23:04 2010 +0200
@@ -5,4 +5,4 @@
 # DESCRIPTION: invoke Java within the Isabelle environment
 
 CLASSPATH="$(jvmpath "$CLASSPATH")"
-exec "$ISABELLE_JAVA" "$@"
+exec "${THIS_JAVA:-$ISABELLE_JAVA}" "$@"
--- a/lib/scripts/getsettings	Wed Apr 21 10:44:44 2010 +0200
+++ b/lib/scripts/getsettings	Wed Apr 21 11:23:04 2010 +0200
@@ -27,6 +27,9 @@
   "$ISABELLE_TOOL" "$@"
 }
 
+#platform
+. "$ISABELLE_HOME/lib/scripts/isabelle-platform"
+
 #Isabelle distribution identifier -- filled in automatically!
 ISABELLE_IDENTIFIER=""
 
@@ -53,7 +56,7 @@
 if [ "$OSTYPE" = cygwin ]; then
   CLASSPATH="$(cygpath -u -p "$CLASSPATH")"
   function jvmpath() { cygpath -w -p "$@"; }
-  CYGWIN_ROOT="$(jvmpath "/")"
+  THIS_CYGWIN="$(jvmpath "/")"
 else
   function jvmpath() { echo "$@"; }
 fi
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/lib/scripts/isabelle-platform	Wed Apr 21 11:23:04 2010 +0200
@@ -0,0 +1,67 @@
+#
+# determine general hardware and operating system type for Isabelle
+#
+# NOTE: The ML system or JVM may have their own idea about the platform!
+
+ISABELLE_PLATFORM="unknown-platform"
+ISABELLE_PLATFORM64=""
+
+case $(uname -s) in
+  Linux)
+    case $(uname -m) in
+      i?86)
+        ISABELLE_PLATFORM=x86-linux
+        ;;
+      x86_64)
+        ISABELLE_PLATFORM=x86-linux
+        ISABELLE_PLATFORM64=x86_64-linux
+        ;;
+    esac
+    ;;
+  Darwin)
+    case $(uname -m) in
+      i?86)
+        ISABELLE_PLATFORM=x86-darwin
+        if [ "$(sysctl -n hw.optional.x86_64 2>/dev/null)" = 1 ]; then
+          ISABELLE_PLATFORM64=x86_64-darwin
+        fi
+        ;;
+      x86_64)
+        ISABELLE_PLATFORM=x86-darwin
+        ISABELLE_PLATFORM64=x86_64-darwin
+        ;;
+      Power* | power* | ppc)
+        ISABELLE_PLATFORM=ppc-darwin
+        ;;
+    esac
+    ;;
+  CYGWIN_NT*)
+    case $(uname -m) in
+      i?86 | x86_64)
+        ISABELLE_PLATFORM=x86-cygwin
+        ;;
+    esac
+    ;;
+  SunOS)
+    case $(uname -r) in
+      5.*)
+        case $(uname -p) in
+          sparc)
+            ISABELLE_PLATFORM=sparc-solaris
+            ;;
+          i?86 | x86_64)
+            ISABELLE_PLATFORM=x86-solaris
+            ;;
+        esac
+        ;;
+    esac
+    ;;
+  FreeBSD|NetBSD)
+    case $(uname -m) in
+      i?86 | x86_64)
+        ISABELLE_PLATFORM=x86-bsd
+        ;;
+    esac
+    ;;
+esac
+
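The new script only assigns the two shell variables, so it can also be
sourced stand-alone to check how a given machine is classified; a
minimal sketch (the example values follow the Linux case above):

  # source the platform detection and print the result,
  # e.g. x86-linux / x86_64-linux on 64 bit Linux
  . "$ISABELLE_HOME/lib/scripts/isabelle-platform"
  echo "ISABELLE_PLATFORM=$ISABELLE_PLATFORM"
  echo "ISABELLE_PLATFORM64=$ISABELLE_PLATFORM64"
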
--- a/lib/scripts/polyml-platform	Wed Apr 21 10:44:44 2010 +0200
+++ b/lib/scripts/polyml-platform	Wed Apr 21 11:23:04 2010 +0200
@@ -1,69 +1,4 @@
 #!/usr/bin/env bash
-#
-# polyml-platform --- determine Poly/ML's idea of current hardware and
-# operating system type
-#
-# NOTE: platform identifiers should be kept as generic as possible,
-# i.e. shared by compatible environments.
-
-PLATFORM="unknown-platform"
 
-case $(uname -s) in
-  SunOS)
-    case $(uname -r) in
-      5.*)
-        case $(uname -p) in
-          sparc)
-            PLATFORM=sparc-solaris
-            ;;
-          i?86)
-            PLATFORM=x86-solaris
-            ;;
-        esac
-        ;;
-    esac
-    ;;
-  Linux)
-    case $(uname -m) in
-      i?86 | x86_64)
-        PLATFORM=x86-linux
-        ;;
-      Power* | power* | ppc)
-        PLATFORM=ppc-linux
-        ;;
-    esac
-    ;;
-  FreeBSD|NetBSD)
-    case $(uname -m) in
-      i?86)
-        PLATFORM=x86-bsd
-        ;;
-    esac
-    ;;
-  Darwin)
-    case $(uname -m) in
-      Power* | power* | ppc)
-        PLATFORM=ppc-darwin
-        ;;
-      i?86)
-        PLATFORM=x86-darwin
-        ;;
-    esac
-    ;;
-  CYGWIN_NT*)
-    case $(uname -m) in
-      i?86)
-        PLATFORM=x86-cygwin
-        ;;
-    esac
-    ;;
-  Windows_NT)
-    case $(uname -m) in
-      ?86)
-        PLATFORM=x86-win32
-        ;;
-    esac
-    ;;
-esac
-
-echo "$PLATFORM"
+echo "### Legacy feature: polyml-platform script is superseded by ISABELLE_PLATFORM" >&2
+echo "$ISABELLE_PLATFORM"
--- a/src/FOL/FOL.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/FOL/FOL.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -371,7 +371,7 @@
 lemmas induct_rulify_fallback =
   induct_forall_def induct_implies_def induct_equal_def induct_conj_def
 
-hide const induct_forall induct_implies induct_equal induct_conj
+hide_const induct_forall induct_implies induct_equal induct_conj
 
 
 text {* Method setup. *}
--- a/src/HOL/Bali/Basis.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Bali/Basis.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -176,7 +176,7 @@
 
 section "sums"
 
-hide const In0 In1
+hide_const In0 In1
 
 notation sum_case  (infixr "'(+')"80)
 
--- a/src/HOL/Bali/TypeSafe.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Bali/TypeSafe.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -9,7 +9,7 @@
 
 section "error free"
  
-hide const field
+hide_const field
 
 lemma error_free_halloc:
   assumes halloc: "G\<turnstile>s0 \<midarrow>halloc oi\<succ>a\<rightarrow> s1" and
--- a/src/HOL/Code_Evaluation.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Code_Evaluation.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -280,8 +280,8 @@
 end
 *}
 
-hide const dummy_term App valapp
-hide (open) const Const termify valtermify term_of term_of_num
+hide_const dummy_term App valapp
+hide_const (open) Const termify valtermify term_of term_of_num
 
 subsection {* Tracing of generated and evaluated code *}
 
@@ -301,7 +301,7 @@
 code_const "tracing :: String.literal => 'a => 'a"
   (Eval "Code'_Evaluation.tracing")
 
-hide (open) const tracing
+hide_const (open) tracing
 code_reserved Eval Code_Evaluation
 
 subsection {* Evaluation setup *}
--- a/src/HOL/Code_Numeral.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Code_Numeral.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -278,7 +278,7 @@
   then show ?thesis by (auto simp add: int_of_def mult_ac)
 qed
 
-hide (open) const of_nat nat_of int_of
+hide_const (open) of_nat nat_of int_of
 
 subsubsection {* Lazy Evaluation of an indexed function *}
 
@@ -289,7 +289,7 @@
 
 termination by (relation "measure (%(f, n, m). Code_Numeral.nat_of (m + 1 - n))") auto
 
-hide (open) const iterate_upto
+hide_const (open) iterate_upto
 
 subsection {* Code generator setup *}
 
--- a/src/HOL/DSequence.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/DSequence.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -97,13 +97,13 @@
 
 code_reserved Eval DSequence
 (*
-hide type Sequence.seq
+hide_type Sequence.seq
 
-hide const Sequence.Seq Sequence.yield Sequence.yieldn Sequence.empty Sequence.single
+hide_const Sequence.Seq Sequence.yield Sequence.yieldn Sequence.empty Sequence.single
   Sequence.append Sequence.flat Sequence.map Sequence.bind Sequence.ifpred Sequence.not_seq
 *)
-hide (open) const empty single eval map_seq bind union if_seq not_seq map
-hide (open) fact empty_def single_def eval.simps map_seq.simps bind_def union_def
+hide_const (open) empty single eval map_seq bind union if_seq not_seq map
+hide_fact (open) empty_def single_def eval.simps map_seq.simps bind_def union_def
   if_seq_def not_seq_def map_def
 
 end
--- a/src/HOL/Datatype.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Datatype.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -512,8 +512,8 @@
 
 
 text {* hides popular names *}
-hide (open) type node item
-hide (open) const Push Node Atom Leaf Numb Lim Split Case
+hide_type (open) node item
+hide_const (open) Push Node Atom Leaf Numb Lim Split Case
 
 use "Tools/Datatype/datatype.ML"
 
--- a/src/HOL/Finite_Set.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Finite_Set.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -513,7 +513,7 @@
 class finite =
   assumes finite_UNIV: "finite (UNIV \<Colon> 'a set)"
 setup {* Sign.parent_path *}
-hide const finite
+hide_const finite
 
 context finite
 begin
--- a/src/HOL/Fun.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Fun.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -525,7 +525,7 @@
 lemma bij_swap_iff: "bij (swap a b f) = bij f"
 by (simp add: bij_def)
 
-hide (open) const swap
+hide_const (open) swap
 
 
 subsection {* Inversion of injective functions *}
--- a/src/HOL/Groups.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Groups.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -88,7 +88,7 @@
 class one =
   fixes one  :: 'a  ("1")
 
-hide (open) const zero one
+hide_const (open) zero one
 
 syntax
   "_index1"  :: index    ("\<^sub>1")
--- a/src/HOL/HOL.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/HOL.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -1567,7 +1567,7 @@
 lemma [induct_simp]: "(x = x) = True" 
   by (rule simp_thms)
 
-hide const induct_forall induct_implies induct_equal induct_conj induct_true induct_false
+hide_const induct_forall induct_implies induct_equal induct_conj induct_true induct_false
 
 use "~~/src/Tools/induct_tacs.ML"
 setup InductTacs.setup
@@ -1886,8 +1886,8 @@
   Nbe.add_const_alias @{thm equals_alias_cert}
 *}
 
-hide (open) const eq
-hide const eq
+hide_const (open) eq
+hide_const eq
 
 text {* Cases *}
 
--- a/src/HOL/Imperative_HOL/Array.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Imperative_HOL/Array.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -92,7 +92,7 @@
      return a
    done)"
 
-hide (open) const new map -- {* avoid clashed with some popular names *}
+hide_const (open) new map -- {* avoid clashes with some popular names *}
 
 
 subsection {* Properties *}
@@ -115,28 +115,28 @@
 
 definition new' where
   [code del]: "new' = Array.new o Code_Numeral.nat_of"
-hide (open) const new'
+hide_const (open) new'
 lemma [code]:
   "Array.new = Array.new' o Code_Numeral.of_nat"
   by (simp add: new'_def o_def)
 
 definition of_list' where
   [code del]: "of_list' i xs = Array.of_list (take (Code_Numeral.nat_of i) xs)"
-hide (open) const of_list'
+hide_const (open) of_list'
 lemma [code]:
   "Array.of_list xs = Array.of_list' (Code_Numeral.of_nat (List.length xs)) xs"
   by (simp add: of_list'_def)
 
 definition make' where
   [code del]: "make' i f = Array.make (Code_Numeral.nat_of i) (f o Code_Numeral.of_nat)"
-hide (open) const make'
+hide_const (open) make'
 lemma [code]:
   "Array.make n f = Array.make' (Code_Numeral.of_nat n) (f o Code_Numeral.nat_of)"
   by (simp add: make'_def o_def)
 
 definition length' where
   [code del]: "length' = Array.length \<guillemotright>== liftM Code_Numeral.of_nat"
-hide (open) const length'
+hide_const (open) length'
 lemma [code]:
   "Array.length = Array.length' \<guillemotright>== liftM Code_Numeral.nat_of"
   by (simp add: length'_def monad_simp',
@@ -145,14 +145,14 @@
 
 definition nth' where
   [code del]: "nth' a = Array.nth a o Code_Numeral.nat_of"
-hide (open) const nth'
+hide_const (open) nth'
 lemma [code]:
   "Array.nth a n = Array.nth' a (Code_Numeral.of_nat n)"
   by (simp add: nth'_def)
 
 definition upd' where
   [code del]: "upd' a i x = Array.upd (Code_Numeral.nat_of i) x a \<guillemotright> return ()"
-hide (open) const upd'
+hide_const (open) upd'
 lemma [code]:
   "Array.upd i x a = Array.upd' a (Code_Numeral.of_nat i) x \<guillemotright> return a"
   by (simp add: upd'_def monad_simp upd_return)
--- a/src/HOL/Imperative_HOL/Heap.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Imperative_HOL/Heap.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -432,6 +432,6 @@
   "array_present a h \<Longrightarrow> array_present a (snd (ref v h))"
   by (simp add: array_present_def new_ref_def ref_def Let_def)
 
-hide (open) const empty array array_of_list upd length ref
+hide_const (open) empty array array_of_list upd length ref
 
 end
--- a/src/HOL/Imperative_HOL/Heap_Monad.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Imperative_HOL/Heap_Monad.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -405,7 +405,7 @@
 lemmas MREC_rule = mrec.MREC_rule
 lemmas MREC_pinduct = mrec.MREC_pinduct
 
-hide (open) const heap execute
+hide_const (open) heap execute
 
 
 subsection {* Code generator setup *}
@@ -426,7 +426,7 @@
   "raise s = raise_exc (Fail (STR s))"
   unfolding Fail_def raise_exc_def raise_def ..
 
-hide (open) const Fail raise_exc
+hide_const (open) Fail raise_exc
 
 
 subsubsection {* SML and OCaml *}
--- a/src/HOL/Imperative_HOL/Ref.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Imperative_HOL/Ref.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -41,7 +41,7 @@
                     return y
                  done)"
 
-hide (open) const new lookup update change
+hide_const (open) new lookup update change
 
 
 subsection {* Properties *}
--- a/src/HOL/Imperative_HOL/ex/Imperative_Reverse.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Imperative_HOL/ex/Imperative_Reverse.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -8,7 +8,7 @@
 imports "~~/src/HOL/Imperative_HOL/Imperative_HOL" Subarray
 begin
 
-hide (open) const swap rev
+hide_const (open) swap rev
 
 fun swap :: "'a\<Colon>heap array \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> unit Heap" where
   "swap a i j = (do
--- a/src/HOL/Imperative_HOL/ex/SatChecker.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Imperative_HOL/ex/SatChecker.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -5,7 +5,7 @@
 header {* An efficient checker for proofs from a SAT solver *}
 
 theory SatChecker
-imports RBT Sorted_List "~~/src/HOL/Imperative_HOL/Imperative_HOL" 
+imports RBT_Impl Sorted_List "~~/src/HOL/Imperative_HOL/Imperative_HOL" 
 begin
 
 section{* General settings and functions for our representation of clauses *}
@@ -635,24 +635,24 @@
 
 section {* Functional version with RedBlackTrees *}
 
-fun tres_thm :: "(ClauseId, Clause) rbt \<Rightarrow> Lit \<times> ClauseId \<Rightarrow> Clause \<Rightarrow> Clause Heap"
+fun tres_thm :: "(ClauseId, Clause) RBT_Impl.rbt \<Rightarrow> Lit \<times> ClauseId \<Rightarrow> Clause \<Rightarrow> Clause Heap"
 where
   "tres_thm t (l, j) cli =
-  (case (RBT.lookup t j) of 
+  (case (RBT_Impl.lookup t j) of 
      None \<Rightarrow> raise (''MiniSatChecked.res_thm: No resolvant clause in thms array for Conflict step.'')
    | Some clj \<Rightarrow> res_thm' l cli clj)"
 
-fun tdoProofStep :: " ProofStep \<Rightarrow> ((ClauseId, Clause) rbt * Clause list) \<Rightarrow> ((ClauseId, Clause) rbt * Clause list) Heap"
+fun tdoProofStep :: " ProofStep \<Rightarrow> ((ClauseId, Clause) RBT_Impl.rbt * Clause list) \<Rightarrow> ((ClauseId, Clause) RBT_Impl.rbt * Clause list) Heap"
 where
   "tdoProofStep (Conflict saveTo (i, rs)) (t, rcl) =
-     (case (RBT.lookup t i) of
+     (case (RBT_Impl.lookup t i) of
        None \<Rightarrow> raise (''MiniSatChecked.doProofStep: No starting clause in thms array for Conflict step.'')
      | Some cli \<Rightarrow> (do
                       result \<leftarrow> foldM (tres_thm t) rs cli;
-                      return ((RBT.insert saveTo result t), rcl)
+                      return ((RBT_Impl.insert saveTo result t), rcl)
                     done))"
-| "tdoProofStep (Delete cid) (t, rcl) = return ((RBT.delete cid t), rcl)"
-| "tdoProofStep (Root cid clause) (t, rcl) = return (RBT.insert cid (sort clause) t, (remdups(sort clause)) # rcl)"
+| "tdoProofStep (Delete cid) (t, rcl) = return ((RBT_Impl.delete cid t), rcl)"
+| "tdoProofStep (Root cid clause) (t, rcl) = return (RBT_Impl.insert cid (sort clause) t, (remdups(sort clause)) # rcl)"
 | "tdoProofStep (Xstep cid1 cid2) (t, rcl) = raise ''MiniSatChecked.doProofStep: Xstep constructor found.''"
 | "tdoProofStep (ProofDone b) (t, rcl) = raise ''MiniSatChecked.doProofStep: ProofDone constructor found.''"
 
@@ -660,8 +660,8 @@
 where
   "tchecker n p i =
   (do 
-     rcs \<leftarrow> foldM (tdoProofStep) p (RBT.Empty, []);
-     (if (RBT.lookup (fst rcs) i) = Some [] then return (snd rcs) 
+     rcs \<leftarrow> foldM (tdoProofStep) p (RBT_Impl.Empty, []);
+     (if (RBT_Impl.lookup (fst rcs) i) = Some [] then return (snd rcs) 
                 else raise(''No empty clause''))
    done)"
 
--- a/src/HOL/Int.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Int.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -2263,7 +2263,7 @@
 lemma [code]: "nat i = nat_aux i 0"
   by (simp add: nat_aux_def)
 
-hide (open) const nat_aux
+hide_const (open) nat_aux
 
 lemma zero_is_num_zero [code, code_unfold_post]:
   "(0\<Colon>int) = Numeral0" 
@@ -2325,7 +2325,7 @@
 
 quickcheck_params [default_type = int]
 
-hide (open) const Pls Min Bit0 Bit1 succ pred
+hide_const (open) Pls Min Bit0 Bit1 succ pred
 
 
 subsection {* Legacy theorems *}
--- a/src/HOL/IsaMakefile	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/IsaMakefile	Wed Apr 21 11:23:04 2010 +0200
@@ -1,3 +1,4 @@
+
 #
 # IsaMakefile for HOL
 #
@@ -406,16 +407,17 @@
   Library/Library/ROOT.ML Library/Library/document/root.tex		\
   Library/Library/document/root.bib					\
   Library/Transitive_Closure_Table.thy Library/While_Combinator.thy	\
-  Library/Product_ord.thy Library/Char_nat.thy Library/Table.thy	\
+  Library/Product_ord.thy Library/Char_nat.thy				\
   Library/Sublist_Order.thy Library/List_lexord.thy			\
   Library/AssocList.thy Library/Formal_Power_Series.thy			\
   Library/Binomial.thy Library/Eval_Witness.thy Library/Code_Char.thy	\
   Library/Code_Char_chr.thy Library/Code_Integer.thy			\
   Library/Mapping.thy Library/Numeral_Type.thy Library/Reflection.thy	\
   Library/Boolean_Algebra.thy Library/Countable.thy			\
-  Library/Diagonalize.thy Library/RBT.thy Library/Univ_Poly.thy		\
+  Library/Diagonalize.thy Library/RBT.thy Library/RBT_Impl.thy		\
+  Library/Univ_Poly.thy							\
   Library/Poly_Deriv.thy Library/Polynomial.thy Library/Preorder.thy	\
-  Library/Product_plus.thy Library/Product_Vector.thy Library/Tree.thy	\
+  Library/Product_plus.thy Library/Product_Vector.thy 			\
   Library/Enum.thy Library/Float.thy Library/Quotient_List.thy		\
   Library/Quotient_Option.thy Library/Quotient_Product.thy		\
   Library/Quotient_Sum.thy Library/Quotient_Syntax.thy			\
--- a/src/HOL/Lattice/Bounds.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Lattice/Bounds.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -6,7 +6,7 @@
 
 theory Bounds imports Orders begin
 
-hide (open) const inf sup
+hide_const (open) inf sup
 
 subsection {* Infimum and supremum *}
 
--- a/src/HOL/Lazy_Sequence.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Lazy_Sequence.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -212,8 +212,8 @@
 where
   "hb_not_seq xq = (case yield xq of None => single () | Some (x, xq) => empty)"
 
-hide (open) type lazy_sequence
-hide (open) const Empty Insert Lazy_Sequence yield empty single append flat map bind if_seq iterate_upto not_seq
-hide fact yield.simps empty_def single_def append.simps flat.simps map.simps bind_def iterate_upto.simps if_seq_def not_seq_def
+hide_type (open) lazy_sequence
+hide_const (open) Empty Insert Lazy_Sequence yield empty single append flat map bind if_seq iterate_upto not_seq
+hide_fact yield.simps empty_def single_def append.simps flat.simps map.simps bind_def iterate_upto.simps if_seq_def not_seq_def
 
 end
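The single hide command is now split by kind into hide_type, hide_const and hide_fact, as used above.  A small hedged sketch with hypothetical names box, Box and box_inject:

  datatype 'a box = Box 'a

  lemma box_inject: "Box x = Box y \<longleftrightarrow> x = y"
    by simp

  hide_type (open) box
  hide_const (open) Box
  hide_fact box_inject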
--- a/src/HOL/Library/AssocList.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Library/AssocList.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -659,49 +659,49 @@
 
 subsection {* Implementation of mappings *}
 
-definition AList :: "('a \<times> 'b) list \<Rightarrow> ('a, 'b) mapping" where
-  "AList xs = Mapping (map_of xs)"
+definition Mapping :: "('a \<times> 'b) list \<Rightarrow> ('a, 'b) mapping" where
+  "Mapping xs = Mapping.Mapping (map_of xs)"
 
-code_datatype AList
+code_datatype Mapping
 
-lemma lookup_AList [simp, code]:
-  "Mapping.lookup (AList xs) = map_of xs"
-  by (simp add: AList_def)
+lemma lookup_Mapping [simp, code]:
+  "Mapping.lookup (Mapping xs) = map_of xs"
+  by (simp add: Mapping_def)
 
-lemma empty_AList [code]:
-  "Mapping.empty = AList []"
+lemma empty_Mapping [code]:
+  "Mapping.empty = Mapping []"
   by (rule mapping_eqI) simp
 
-lemma is_empty_AList [code]:
-  "Mapping.is_empty (AList xs) \<longleftrightarrow> null xs"
+lemma is_empty_Mapping [code]:
+  "Mapping.is_empty (Mapping xs) \<longleftrightarrow> null xs"
   by (cases xs) (simp_all add: is_empty_def)
 
-lemma update_AList [code]:
-  "Mapping.update k v (AList xs) = AList (update k v xs)"
+lemma update_Mapping [code]:
+  "Mapping.update k v (Mapping xs) = Mapping (update k v xs)"
   by (rule mapping_eqI) (simp add: update_conv')
 
-lemma delete_AList [code]:
-  "Mapping.delete k (AList xs) = AList (delete k xs)"
+lemma delete_Mapping [code]:
+  "Mapping.delete k (Mapping xs) = Mapping (delete k xs)"
   by (rule mapping_eqI) (simp add: delete_conv')
 
-lemma keys_AList [code]:
-  "Mapping.keys (AList xs) = set (map fst xs)"
+lemma keys_Mapping [code]:
+  "Mapping.keys (Mapping xs) = set (map fst xs)"
   by (simp add: keys_def dom_map_of_conv_image_fst)
 
-lemma ordered_keys_AList [code]:
-  "Mapping.ordered_keys (AList xs) = sort (remdups (map fst xs))"
-  by (simp only: ordered_keys_def keys_AList sorted_list_of_set_sort_remdups)
+lemma ordered_keys_Mapping [code]:
+  "Mapping.ordered_keys (Mapping xs) = sort (remdups (map fst xs))"
+  by (simp only: ordered_keys_def keys_Mapping sorted_list_of_set_sort_remdups)
 
-lemma size_AList [code]:
-  "Mapping.size (AList xs) = length (remdups (map fst xs))"
+lemma size_Mapping [code]:
+  "Mapping.size (Mapping xs) = length (remdups (map fst xs))"
   by (simp add: size_def length_remdups_card_conv dom_map_of_conv_image_fst)
 
-lemma tabulate_AList [code]:
-  "Mapping.tabulate ks f = AList (map (\<lambda>k. (k, f k)) ks)"
+lemma tabulate_Mapping [code]:
+  "Mapping.tabulate ks f = Mapping (map (\<lambda>k. (k, f k)) ks)"
   by (rule mapping_eqI) (simp add: map_of_map_restrict)
 
-lemma bulkload_AList [code]:
-  "Mapping.bulkload vs = AList (map (\<lambda>n. (n, vs ! n)) [0..<length vs])"
+lemma bulkload_Mapping [code]:
+  "Mapping.bulkload vs = Mapping (map (\<lambda>n. (n, vs ! n)) [0..<length vs])"
   by (rule mapping_eqI) (simp add: map_of_map_restrict expand_fun_eq)
 
 end
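With the constructor renamed to Mapping, code generation for the mapping type goes through association lists directly.  A hedged evaluation sketch, assuming the [code] equations above are active; the constant m is hypothetical:

  definition m :: "(nat, int) mapping" where
    "m = Mapping.update 2 20 (Mapping.update 1 10 Mapping.empty)"

  value "Mapping.lookup m 1"  (* expected to yield Some 10 on the association-list representation *)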
--- a/src/HOL/Library/Dlist.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Library/Dlist.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -47,6 +47,7 @@
   show "[] \<in> ?dlist" by simp
 qed
 
+
 text {* Formal, totalized constructor for @{typ "'a dlist"}: *}
 
 definition Dlist :: "'a list \<Rightarrow> 'a dlist" where
@@ -60,7 +61,7 @@
   "list_of_dlist (Dlist xs) = remdups xs"
   by (simp add: Dlist_def Abs_dlist_inverse)
 
-lemma Dlist_list_of_dlist [simp]:
+lemma Dlist_list_of_dlist [simp, code abstype]:
   "Dlist (list_of_dlist dxs) = dxs"
   by (simp add: Dlist_def list_of_dlist_inverse distinct_remdups_id)
 
@@ -100,9 +101,6 @@
 
 section {* Executable version obeying invariant *}
 
-code_abstype Dlist list_of_dlist
-  by simp
-
 lemma list_of_dlist_empty [simp, code abstract]:
   "list_of_dlist empty = []"
   by (simp add: empty_def)
@@ -249,6 +247,6 @@
 
 end
 
-hide (open) const member fold empty insert remove map filter null member length fold
+hide_const (open) member fold empty insert remove map filter null member length fold
 
 end
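The separate code_abstype declaration is superseded by the [code abstype] attribute on Dlist_list_of_dlist; executable operations on dlist are then specified by equations on the list representation, marked [code abstract] as above.  A hedged sketch of that pattern; the constant single is hypothetical:

  definition single :: "'a \<Rightarrow> 'a dlist" where
    "single x = Dlist [x]"

  lemma list_of_dlist_single [code abstract]:
    "list_of_dlist (single x) = [x]"
    by (simp add: single_def)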
--- a/src/HOL/Library/Efficient_Nat.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Library/Efficient_Nat.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -496,6 +496,6 @@
 code_modulename Haskell
   Efficient_Nat Arith
 
-hide const int
+hide_const int
 
 end
--- a/src/HOL/Library/Executable_Set.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Library/Executable_Set.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -265,7 +265,7 @@
   "Union (Coset []) = Coset []"
   unfolding Union_def Sup_sup by simp_all
 
-hide (open) const is_empty empty remove
+hide_const (open) is_empty empty remove
   set_eq subset_eq subset inter union subtract Inf Sup Inter Union
 
 end
--- a/src/HOL/Library/Fset.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Library/Fset.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -293,7 +293,7 @@
 declare mem_def [simp del]
 
 
-hide (open) const is_empty insert remove map filter forall exists card
+hide_const (open) is_empty insert remove map filter forall exists card
   Inter Union
 
 end
--- a/src/HOL/Library/Library.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Library/Library.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -57,7 +57,6 @@
   SML_Quickcheck
   State_Monad
   Sum_Of_Squares
-  Table
   Transitive_Closure_Table
   Univ_Poly
   While_Combinator
--- a/src/HOL/Library/Mapping.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Library/Mapping.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -122,6 +122,10 @@
   "bulkload xs = tabulate [0..<length xs] (nth xs)"
   by (rule mapping_eqI) (simp add: expand_fun_eq)
 
+lemma is_empty_empty:
+  "is_empty m \<longleftrightarrow> m = Mapping Map.empty"
+  by (cases m) (simp add: is_empty_def)
+
 
 subsection {* Some technical code lemmas *}
 
@@ -142,6 +146,6 @@
   by (cases m) simp
 
 
-hide (open) const empty is_empty lookup update delete ordered_keys keys size replace tabulate bulkload
+hide_const (open) empty is_empty lookup update delete ordered_keys keys size replace tabulate bulkload
 
 end
\ No newline at end of file
--- a/src/HOL/Library/Multiset.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Library/Multiset.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -1001,7 +1001,7 @@
 no_notation fcomp (infixl "o>" 60)
 no_notation scomp (infixl "o\<rightarrow>" 60)
 
-hide (open) const bagify
+hide_const (open) bagify
 
 
 subsection {* The multiset order *}
--- a/src/HOL/Library/Nested_Environment.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Library/Nested_Environment.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -54,7 +54,7 @@
   | "lookup_option None xs = None"
   | "lookup_option (Some e) xs = lookup e xs"
 
-hide const lookup_option
+hide_const lookup_option
 
 text {*
   \medskip The characteristic cases of @{term lookup} are expressed by
@@ -262,7 +262,7 @@
   | "update_option xs opt (Some e) =
       (if xs = [] then opt else Some (update xs opt e))"
 
-hide const update_option
+hide_const update_option
 
 text {*
   \medskip The characteristic cases of @{term update} are expressed by
--- a/src/HOL/Library/Quotient_List.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Library/Quotient_List.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -130,24 +130,24 @@
   by (induct l)
      (simp_all add: Quotient_abs_rep[OF a] Quotient_abs_rep[OF b])
 
-
 lemma map_prs[quot_preserve]:
   assumes a: "Quotient R1 abs1 rep1"
   and     b: "Quotient R2 abs2 rep2"
   shows "((abs1 ---> rep2) ---> (map rep1) ---> (map abs2)) map = map"
-  by (simp only: expand_fun_eq fun_map_def map_prs_aux[OF a b])
-     (simp)
-
+  and   "((abs1 ---> id) ---> map rep1 ---> id) map = map"
+  by (simp_all only: expand_fun_eq fun_map_def map_prs_aux[OF a b])
+     (simp_all add: Quotient_abs_rep[OF a])
 
 lemma map_rsp[quot_respect]:
   assumes q1: "Quotient R1 Abs1 Rep1"
   and     q2: "Quotient R2 Abs2 Rep2"
   shows "((R1 ===> R2) ===> (list_rel R1) ===> list_rel R2) map map"
-  apply(simp)
-  apply(rule allI)+
-  apply(rule impI)
-  apply(rule allI)+
-  apply (induct_tac xa ya rule: list_induct2')
+  and   "((R1 ===> op =) ===> (list_rel R1) ===> op =) map map"
+  apply simp_all
+  apply(rule_tac [!] allI)+
+  apply(rule_tac [!] impI)
+  apply(rule_tac [!] allI)+
+  apply (induct_tac [!] xa ya rule: list_induct2')
   apply simp_all
   done
 
@@ -217,6 +217,52 @@
   apply (simp_all)
   done
 
+lemma list_rel_rsp:
+  assumes r: "\<forall>x y. R x y \<longrightarrow> (\<forall>a b. R a b \<longrightarrow> S x a = T y b)"
+  and l1: "list_rel R x y"
+  and l2: "list_rel R a b"
+  shows "list_rel S x a = list_rel T y b"
+  proof -
+    have a: "length y = length x" by (rule list_rel_len[OF l1, symmetric])
+    have c: "length a = length b" by (rule list_rel_len[OF l2])
+    show ?thesis proof (cases "length x = length a")
+      case True
+      have b: "length x = length a" by fact
+      show ?thesis using a b c r l1 l2 proof (induct rule: list_induct4)
+        case Nil
+        show ?case using assms by simp
+      next
+        case (Cons h t)
+        then show ?case by auto
+      qed
+    next
+      case False
+      have d: "length x \<noteq> length a" by fact
+      then have e: "\<not>list_rel S x a" using list_rel_len by auto
+      have "length y \<noteq> length b" using d a c by simp
+      then have "\<not>list_rel T y b" using list_rel_len by auto
+      then show ?thesis using e by simp
+    qed
+  qed
+
+lemma[quot_respect]:
+  "((R ===> R ===> op =) ===> list_rel R ===> list_rel R ===> op =) list_rel list_rel"
+  by (simp add: list_rel_rsp)
+
+lemma[quot_preserve]:
+  assumes a: "Quotient R abs1 rep1"
+  shows "((abs1 ---> abs1 ---> id) ---> map rep1 ---> map rep1 ---> id) list_rel = list_rel"
+  apply (simp add: expand_fun_eq)
+  apply clarify
+  apply (induct_tac xa xb rule: list_induct2')
+  apply (simp_all add: Quotient_abs_rep[OF a])
+  done
+
+lemma[quot_preserve]:
+  assumes a: "Quotient R abs1 rep1"
+  shows "(list_rel ((rep1 ---> rep1 ---> id) R) l m) = (l = m)"
+  by (induct l m rule: list_induct2') (simp_all add: Quotient_rel_rep[OF a])
+
 lemma list_rel_eq[id_simps]:
   shows "(list_rel (op =)) = (op =)"
   unfolding expand_fun_eq
--- a/src/HOL/Library/RBT.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Library/RBT.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -1,1100 +1,253 @@
-(*  Title:      RBT.thy
-    Author:     Markus Reiter, TU Muenchen
-    Author:     Alexander Krauss, TU Muenchen
-*)
+(* Author: Florian Haftmann, TU Muenchen *)
 
-header {* Red-Black Trees *}
+header {* Abstract type of Red-Black Trees *}
 
 (*<*)
 theory RBT
-imports Main
+imports Main RBT_Impl Mapping
 begin
 
-subsection {* Datatype of RB trees *}
-
-datatype color = R | B
-datatype ('a, 'b) rbt = Empty | Branch color "('a, 'b) rbt" 'a 'b "('a, 'b) rbt"
-
-lemma rbt_cases:
-  obtains (Empty) "t = Empty" 
-  | (Red) l k v r where "t = Branch R l k v r" 
-  | (Black) l k v r where "t = Branch B l k v r"
-proof (cases t)
-  case Empty with that show thesis by blast
-next
-  case (Branch c) with that show thesis by (cases c) blast+
-qed
-
-subsection {* Tree properties *}
-
-subsubsection {* Content of a tree *}
-
-primrec entries :: "('a, 'b) rbt \<Rightarrow> ('a \<times> 'b) list"
-where 
-  "entries Empty = []"
-| "entries (Branch _ l k v r) = entries l @ (k,v) # entries r"
-
-abbreviation (input) entry_in_tree :: "'a \<Rightarrow> 'b \<Rightarrow> ('a, 'b) rbt \<Rightarrow> bool"
-where
-  "entry_in_tree k v t \<equiv> (k, v) \<in> set (entries t)"
-
-definition keys :: "('a, 'b) rbt \<Rightarrow> 'a list" where
-  "keys t = map fst (entries t)"
-
-lemma keys_simps [simp, code]:
-  "keys Empty = []"
-  "keys (Branch c l k v r) = keys l @ k # keys r"
-  by (simp_all add: keys_def)
-
-lemma entry_in_tree_keys:
-  assumes "(k, v) \<in> set (entries t)"
-  shows "k \<in> set (keys t)"
-proof -
-  from assms have "fst (k, v) \<in> fst ` set (entries t)" by (rule imageI)
-  then show ?thesis by (simp add: keys_def)
-qed
-
-lemma keys_entries:
-  "k \<in> set (keys t) \<longleftrightarrow> (\<exists>v. (k, v) \<in> set (entries t))"
-  by (auto intro: entry_in_tree_keys) (auto simp add: keys_def)
-
-
-subsubsection {* Search tree properties *}
-
-definition tree_less :: "'a\<Colon>order \<Rightarrow> ('a, 'b) rbt \<Rightarrow> bool"
-where
-  tree_less_prop: "tree_less k t \<longleftrightarrow> (\<forall>x\<in>set (keys t). x < k)"
-
-abbreviation tree_less_symbol (infix "|\<guillemotleft>" 50)
-where "t |\<guillemotleft> x \<equiv> tree_less x t"
-
-definition tree_greater :: "'a\<Colon>order \<Rightarrow> ('a, 'b) rbt \<Rightarrow> bool" (infix "\<guillemotleft>|" 50) 
-where
-  tree_greater_prop: "tree_greater k t = (\<forall>x\<in>set (keys t). k < x)"
-
-lemma tree_less_simps [simp]:
-  "tree_less k Empty = True"
-  "tree_less k (Branch c lt kt v rt) \<longleftrightarrow> kt < k \<and> tree_less k lt \<and> tree_less k rt"
-  by (auto simp add: tree_less_prop)
-
-lemma tree_greater_simps [simp]:
-  "tree_greater k Empty = True"
-  "tree_greater k (Branch c lt kt v rt) \<longleftrightarrow> k < kt \<and> tree_greater k lt \<and> tree_greater k rt"
-  by (auto simp add: tree_greater_prop)
-
-lemmas tree_ord_props = tree_less_prop tree_greater_prop
-
-lemmas tree_greater_nit = tree_greater_prop entry_in_tree_keys
-lemmas tree_less_nit = tree_less_prop entry_in_tree_keys
-
-lemma tree_less_eq_trans: "l |\<guillemotleft> u \<Longrightarrow> u \<le> v \<Longrightarrow> l |\<guillemotleft> v"
-  and tree_less_trans: "t |\<guillemotleft> x \<Longrightarrow> x < y \<Longrightarrow> t |\<guillemotleft> y"
-  and tree_greater_eq_trans: "u \<le> v \<Longrightarrow> v \<guillemotleft>| r \<Longrightarrow> u \<guillemotleft>| r"
-  and tree_greater_trans: "x < y \<Longrightarrow> y \<guillemotleft>| t \<Longrightarrow> x \<guillemotleft>| t"
-  by (auto simp: tree_ord_props)
-
-primrec sorted :: "('a::linorder, 'b) rbt \<Rightarrow> bool"
-where
-  "sorted Empty = True"
-| "sorted (Branch c l k v r) = (l |\<guillemotleft> k \<and> k \<guillemotleft>| r \<and> sorted l \<and> sorted r)"
-
-lemma sorted_entries:
-  "sorted t \<Longrightarrow> List.sorted (List.map fst (entries t))"
-by (induct t) 
-  (force simp: sorted_append sorted_Cons tree_ord_props 
-      dest!: entry_in_tree_keys)+
-
-lemma distinct_entries:
-  "sorted t \<Longrightarrow> distinct (List.map fst (entries t))"
-by (induct t) 
-  (force simp: sorted_append sorted_Cons tree_ord_props 
-      dest!: entry_in_tree_keys)+
-
-
-subsubsection {* Tree lookup *}
-
-primrec lookup :: "('a\<Colon>linorder, 'b) rbt \<Rightarrow> 'a \<rightharpoonup> 'b"
-where
-  "lookup Empty k = None"
-| "lookup (Branch _ l x y r) k = (if k < x then lookup l k else if x < k then lookup r k else Some y)"
-
-lemma lookup_keys: "sorted t \<Longrightarrow> dom (lookup t) = set (keys t)"
-  by (induct t) (auto simp: dom_def tree_greater_prop tree_less_prop)
+subsection {* Type definition *}
 
-lemma dom_lookup_Branch: 
-  "sorted (Branch c t1 k v t2) \<Longrightarrow> 
-    dom (lookup (Branch c t1 k v t2)) 
-    = Set.insert k (dom (lookup t1) \<union> dom (lookup t2))"
+typedef (open) ('a, 'b) rbt = "{t :: ('a\<Colon>linorder, 'b) RBT_Impl.rbt. is_rbt t}"
+  morphisms impl_of RBT
 proof -
-  assume "sorted (Branch c t1 k v t2)"
-  moreover from this have "sorted t1" "sorted t2" by simp_all
-  ultimately show ?thesis by (simp add: lookup_keys)
-qed
-
-lemma finite_dom_lookup [simp, intro!]: "finite (dom (lookup t))"
-proof (induct t)
-  case Empty then show ?case by simp
-next
-  case (Branch color t1 a b t2)
-  let ?A = "Set.insert a (dom (lookup t1) \<union> dom (lookup t2))"
-  have "dom (lookup (Branch color t1 a b t2)) \<subseteq> ?A" by (auto split: split_if_asm)
-  moreover from Branch have "finite (insert a (dom (lookup t1) \<union> dom (lookup t2)))" by simp
-  ultimately show ?case by (rule finite_subset)
-qed 
-
-lemma lookup_tree_less[simp]: "t |\<guillemotleft> k \<Longrightarrow> lookup t k = None" 
-by (induct t) auto
-
-lemma lookup_tree_greater[simp]: "k \<guillemotleft>| t \<Longrightarrow> lookup t k = None"
-by (induct t) auto
-
-lemma lookup_Empty: "lookup Empty = empty"
-by (rule ext) simp
-
-lemma map_of_entries:
-  "sorted t \<Longrightarrow> map_of (entries t) = lookup t"
-proof (induct t)
-  case Empty thus ?case by (simp add: lookup_Empty)
-next
-  case (Branch c t1 k v t2)
-  have "lookup (Branch c t1 k v t2) = lookup t2 ++ [k\<mapsto>v] ++ lookup t1"
-  proof (rule ext)
-    fix x
-    from Branch have SORTED: "sorted (Branch c t1 k v t2)" by simp
-    let ?thesis = "lookup (Branch c t1 k v t2) x = (lookup t2 ++ [k \<mapsto> v] ++ lookup t1) x"
-
-    have DOM_T1: "!!k'. k'\<in>dom (lookup t1) \<Longrightarrow> k>k'"
-    proof -
-      fix k'
-      from SORTED have "t1 |\<guillemotleft> k" by simp
-      with tree_less_prop have "\<forall>k'\<in>set (keys t1). k>k'" by auto
-      moreover assume "k'\<in>dom (lookup t1)"
-      ultimately show "k>k'" using lookup_keys SORTED by auto
-    qed
-    
-    have DOM_T2: "!!k'. k'\<in>dom (lookup t2) \<Longrightarrow> k<k'"
-    proof -
-      fix k'
-      from SORTED have "k \<guillemotleft>| t2" by simp
-      with tree_greater_prop have "\<forall>k'\<in>set (keys t2). k<k'" by auto
-      moreover assume "k'\<in>dom (lookup t2)"
-      ultimately show "k<k'" using lookup_keys SORTED by auto
-    qed
-    
-    {
-      assume C: "x<k"
-      hence "lookup (Branch c t1 k v t2) x = lookup t1 x" by simp
-      moreover from C have "x\<notin>dom [k\<mapsto>v]" by simp
-      moreover have "x\<notin>dom (lookup t2)" proof
-        assume "x\<in>dom (lookup t2)"
-        with DOM_T2 have "k<x" by blast
-        with C show False by simp
-      qed
-      ultimately have ?thesis by (simp add: map_add_upd_left map_add_dom_app_simps)
-    } moreover {
-      assume [simp]: "x=k"
-      hence "lookup (Branch c t1 k v t2) x = [k \<mapsto> v] x" by simp
-      moreover have "x\<notin>dom (lookup t1)" proof
-        assume "x\<in>dom (lookup t1)"
-        with DOM_T1 have "k>x" by blast
-        thus False by simp
-      qed
-      ultimately have ?thesis by (simp add: map_add_upd_left map_add_dom_app_simps)
-    } moreover {
-      assume C: "x>k"
-      hence "lookup (Branch c t1 k v t2) x = lookup t2 x" by (simp add: less_not_sym[of k x])
-      moreover from C have "x\<notin>dom [k\<mapsto>v]" by simp
-      moreover have "x\<notin>dom (lookup t1)" proof
-        assume "x\<in>dom (lookup t1)"
-        with DOM_T1 have "k>x" by simp
-        with C show False by simp
-      qed
-      ultimately have ?thesis by (simp add: map_add_upd_left map_add_dom_app_simps)
-    } ultimately show ?thesis using less_linear by blast
-  qed
-  also from Branch have "lookup t2 ++ [k \<mapsto> v] ++ lookup t1 = map_of (entries (Branch c t1 k v t2))" by simp
-  finally show ?case by simp
-qed
-
-lemma lookup_in_tree: "sorted t \<Longrightarrow> lookup t k = Some v \<longleftrightarrow> (k, v) \<in> set (entries t)"
-  by (simp add: map_of_entries [symmetric] distinct_entries)
-
-lemma set_entries_inject:
-  assumes sorted: "sorted t1" "sorted t2" 
-  shows "set (entries t1) = set (entries t2) \<longleftrightarrow> entries t1 = entries t2"
-proof -
-  from sorted have "distinct (map fst (entries t1))"
-    "distinct (map fst (entries t2))"
-    by (auto intro: distinct_entries)
-  with sorted show ?thesis
-    by (auto intro: map_sorted_distinct_set_unique sorted_entries simp add: distinct_map)
-qed
-
-lemma entries_eqI:
-  assumes sorted: "sorted t1" "sorted t2" 
-  assumes lookup: "lookup t1 = lookup t2"
-  shows "entries t1 = entries t2"
-proof -
-  from sorted lookup have "map_of (entries t1) = map_of (entries t2)"
-    by (simp add: map_of_entries)
-  with sorted have "set (entries t1) = set (entries t2)"
-    by (simp add: map_of_inject_set distinct_entries)
-  with sorted show ?thesis by (simp add: set_entries_inject)
+  have "RBT_Impl.Empty \<in> ?rbt" by simp
+  then show ?thesis ..
 qed
 
-lemma entries_lookup:
-  assumes "sorted t1" "sorted t2" 
-  shows "entries t1 = entries t2 \<longleftrightarrow> lookup t1 = lookup t2"
-  using assms by (auto intro: entries_eqI simp add: map_of_entries [symmetric])
-
-lemma lookup_from_in_tree: 
-  assumes "sorted t1" "sorted t2" 
-  and "\<And>v. (k\<Colon>'a\<Colon>linorder, v) \<in> set (entries t1) \<longleftrightarrow> (k, v) \<in> set (entries t2)" 
-  shows "lookup t1 k = lookup t2 k"
-proof -
-  from assms have "k \<in> dom (lookup t1) \<longleftrightarrow> k \<in> dom (lookup t2)"
-    by (simp add: keys_entries lookup_keys)
-  with assms show ?thesis by (auto simp add: lookup_in_tree [symmetric])
-qed
-
-
-subsubsection {* Red-black properties *}
-
-primrec color_of :: "('a, 'b) rbt \<Rightarrow> color"
-where
-  "color_of Empty = B"
-| "color_of (Branch c _ _ _ _) = c"
+lemma is_rbt_impl_of [simp, intro]:
+  "is_rbt (impl_of t)"
+  using impl_of [of t] by simp
 
-primrec bheight :: "('a,'b) rbt \<Rightarrow> nat"
-where
-  "bheight Empty = 0"
-| "bheight (Branch c lt k v rt) = (if c = B then Suc (bheight lt) else bheight lt)"
-
-primrec inv1 :: "('a, 'b) rbt \<Rightarrow> bool"
-where
-  "inv1 Empty = True"
-| "inv1 (Branch c lt k v rt) \<longleftrightarrow> inv1 lt \<and> inv1 rt \<and> (c = B \<or> color_of lt = B \<and> color_of rt = B)"
+lemma rbt_eq:
+  "t1 = t2 \<longleftrightarrow> impl_of t1 = impl_of t2"
+  by (simp add: impl_of_inject)
 
-primrec inv1l :: "('a, 'b) rbt \<Rightarrow> bool" -- {* Weaker version *}
-where
-  "inv1l Empty = True"
-| "inv1l (Branch c l k v r) = (inv1 l \<and> inv1 r)"
-lemma [simp]: "inv1 t \<Longrightarrow> inv1l t" by (cases t) simp+
-
-primrec inv2 :: "('a, 'b) rbt \<Rightarrow> bool"
-where
-  "inv2 Empty = True"
-| "inv2 (Branch c lt k v rt) = (inv2 lt \<and> inv2 rt \<and> bheight lt = bheight rt)"
-
-definition is_rbt :: "('a\<Colon>linorder, 'b) rbt \<Rightarrow> bool" where
-  "is_rbt t \<longleftrightarrow> inv1 t \<and> inv2 t \<and> color_of t = B \<and> sorted t"
-
-lemma is_rbt_sorted [simp]:
-  "is_rbt t \<Longrightarrow> sorted t" by (simp add: is_rbt_def)
-
-theorem Empty_is_rbt [simp]:
-  "is_rbt Empty" by (simp add: is_rbt_def)
+lemma [code abstype]:
+  "RBT (impl_of t) = t"
+  by (simp add: impl_of_inverse)
 
 
-subsection {* Insertion *}
-
-fun (* slow, due to massive case splitting *)
-  balance :: "('a,'b) rbt \<Rightarrow> 'a \<Rightarrow> 'b \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt"
-where
-  "balance (Branch R a w x b) s t (Branch R c y z d) = Branch R (Branch B a w x b) s t (Branch B c y z d)" |
-  "balance (Branch R (Branch R a w x b) s t c) y z d = Branch R (Branch B a w x b) s t (Branch B c y z d)" |
-  "balance (Branch R a w x (Branch R b s t c)) y z d = Branch R (Branch B a w x b) s t (Branch B c y z d)" |
-  "balance a w x (Branch R b s t (Branch R c y z d)) = Branch R (Branch B a w x b) s t (Branch B c y z d)" |
-  "balance a w x (Branch R (Branch R b s t c) y z d) = Branch R (Branch B a w x b) s t (Branch B c y z d)" |
-  "balance a s t b = Branch B a s t b"
-
-lemma balance_inv1: "\<lbrakk>inv1l l; inv1l r\<rbrakk> \<Longrightarrow> inv1 (balance l k v r)" 
-  by (induct l k v r rule: balance.induct) auto
-
-lemma balance_bheight: "bheight l = bheight r \<Longrightarrow> bheight (balance l k v r) = Suc (bheight l)"
-  by (induct l k v r rule: balance.induct) auto
-
-lemma balance_inv2: 
-  assumes "inv2 l" "inv2 r" "bheight l = bheight r"
-  shows "inv2 (balance l k v r)"
-  using assms
-  by (induct l k v r rule: balance.induct) auto
-
-lemma balance_tree_greater[simp]: "(v \<guillemotleft>| balance a k x b) = (v \<guillemotleft>| a \<and> v \<guillemotleft>| b \<and> v < k)" 
-  by (induct a k x b rule: balance.induct) auto
-
-lemma balance_tree_less[simp]: "(balance a k x b |\<guillemotleft> v) = (a |\<guillemotleft> v \<and> b |\<guillemotleft> v \<and> k < v)"
-  by (induct a k x b rule: balance.induct) auto
+subsection {* Primitive operations *}
 
-lemma balance_sorted: 
-  fixes k :: "'a::linorder"
-  assumes "sorted l" "sorted r" "l |\<guillemotleft> k" "k \<guillemotleft>| r"
-  shows "sorted (balance l k v r)"
-using assms proof (induct l k v r rule: balance.induct)
-  case ("2_2" a x w b y t c z s va vb vd vc)
-  hence "y < z \<and> z \<guillemotleft>| Branch B va vb vd vc" 
-    by (auto simp add: tree_ord_props)
-  hence "tree_greater y (Branch B va vb vd vc)" by (blast dest: tree_greater_trans)
-  with "2_2" show ?case by simp
-next
-  case ("3_2" va vb vd vc x w b y s c z)
-  from "3_2" have "x < y \<and> tree_less x (Branch B va vb vd vc)" 
-    by simp
-  hence "tree_less y (Branch B va vb vd vc)" by (blast dest: tree_less_trans)
-  with "3_2" show ?case by simp
-next
-  case ("3_3" x w b y s c z t va vb vd vc)
-  from "3_3" have "y < z \<and> tree_greater z (Branch B va vb vd vc)" by simp
-  hence "tree_greater y (Branch B va vb vd vc)" by (blast dest: tree_greater_trans)
-  with "3_3" show ?case by simp
-next
-  case ("3_4" vd ve vg vf x w b y s c z t va vb vii vc)
-  hence "x < y \<and> tree_less x (Branch B vd ve vg vf)" by simp
-  hence 1: "tree_less y (Branch B vd ve vg vf)" by (blast dest: tree_less_trans)
-  from "3_4" have "y < z \<and> tree_greater z (Branch B va vb vii vc)" by simp
-  hence "tree_greater y (Branch B va vb vii vc)" by (blast dest: tree_greater_trans)
-  with 1 "3_4" show ?case by simp
-next
-  case ("4_2" va vb vd vc x w b y s c z t dd)
-  hence "x < y \<and> tree_less x (Branch B va vb vd vc)" by simp
-  hence "tree_less y (Branch B va vb vd vc)" by (blast dest: tree_less_trans)
-  with "4_2" show ?case by simp
-next
-  case ("5_2" x w b y s c z t va vb vd vc)
-  hence "y < z \<and> tree_greater z (Branch B va vb vd vc)" by simp
-  hence "tree_greater y (Branch B va vb vd vc)" by (blast dest: tree_greater_trans)
-  with "5_2" show ?case by simp
-next
-  case ("5_3" va vb vd vc x w b y s c z t)
-  hence "x < y \<and> tree_less x (Branch B va vb vd vc)" by simp
-  hence "tree_less y (Branch B va vb vd vc)" by (blast dest: tree_less_trans)
-  with "5_3" show ?case by simp
-next
-  case ("5_4" va vb vg vc x w b y s c z t vd ve vii vf)
-  hence "x < y \<and> tree_less x (Branch B va vb vg vc)" by simp
-  hence 1: "tree_less y (Branch B va vb vg vc)" by (blast dest: tree_less_trans)
-  from "5_4" have "y < z \<and> tree_greater z (Branch B vd ve vii vf)" by simp
-  hence "tree_greater y (Branch B vd ve vii vf)" by (blast dest: tree_greater_trans)
-  with 1 "5_4" show ?case by simp
-qed simp+
-
-lemma entries_balance [simp]:
-  "entries (balance l k v r) = entries l @ (k, v) # entries r"
-  by (induct l k v r rule: balance.induct) auto
-
-lemma keys_balance [simp]: 
-  "keys (balance l k v r) = keys l @ k # keys r"
-  by (simp add: keys_def)
+definition lookup :: "('a\<Colon>linorder, 'b) rbt \<Rightarrow> 'a \<rightharpoonup> 'b" where
+  [code]: "lookup t = RBT_Impl.lookup (impl_of t)"
 
-lemma balance_in_tree:  
-  "entry_in_tree k x (balance l v y r) \<longleftrightarrow> entry_in_tree k x l \<or> k = v \<and> x = y \<or> entry_in_tree k x r"
-  by (auto simp add: keys_def)
-
-lemma lookup_balance[simp]: 
-fixes k :: "'a::linorder"
-assumes "sorted l" "sorted r" "l |\<guillemotleft> k" "k \<guillemotleft>| r"
-shows "lookup (balance l k v r) x = lookup (Branch B l k v r) x"
-by (rule lookup_from_in_tree) (auto simp:assms balance_in_tree balance_sorted)
-
-primrec paint :: "color \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt"
-where
-  "paint c Empty = Empty"
-| "paint c (Branch _ l k v r) = Branch c l k v r"
-
-lemma paint_inv1l[simp]: "inv1l t \<Longrightarrow> inv1l (paint c t)" by (cases t) auto
-lemma paint_inv1[simp]: "inv1l t \<Longrightarrow> inv1 (paint B t)" by (cases t) auto
-lemma paint_inv2[simp]: "inv2 t \<Longrightarrow> inv2 (paint c t)" by (cases t) auto
-lemma paint_color_of[simp]: "color_of (paint B t) = B" by (cases t) auto
-lemma paint_sorted[simp]: "sorted t \<Longrightarrow> sorted (paint c t)" by (cases t) auto
-lemma paint_in_tree[simp]: "entry_in_tree k x (paint c t) = entry_in_tree k x t" by (cases t) auto
-lemma paint_lookup[simp]: "lookup (paint c t) = lookup t" by (rule ext) (cases t, auto)
-lemma paint_tree_greater[simp]: "(v \<guillemotleft>| paint c t) = (v \<guillemotleft>| t)" by (cases t) auto
-lemma paint_tree_less[simp]: "(paint c t |\<guillemotleft> v) = (t |\<guillemotleft> v)" by (cases t) auto
-
-fun
-  ins :: "('a\<Colon>linorder \<Rightarrow> 'b \<Rightarrow> 'b \<Rightarrow> 'b) \<Rightarrow> 'a \<Rightarrow> 'b \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt"
-where
-  "ins f k v Empty = Branch R Empty k v Empty" |
-  "ins f k v (Branch B l x y r) = (if k < x then balance (ins f k v l) x y r
-                               else if k > x then balance l x y (ins f k v r)
-                               else Branch B l x (f k y v) r)" |
-  "ins f k v (Branch R l x y r) = (if k < x then Branch R (ins f k v l) x y r
-                               else if k > x then Branch R l x y (ins f k v r)
-                               else Branch R l x (f k y v) r)"
-
-lemma ins_inv1_inv2: 
-  assumes "inv1 t" "inv2 t"
-  shows "inv2 (ins f k x t)" "bheight (ins f k x t) = bheight t" 
-  "color_of t = B \<Longrightarrow> inv1 (ins f k x t)" "inv1l (ins f k x t)"
-  using assms
-  by (induct f k x t rule: ins.induct) (auto simp: balance_inv1 balance_inv2 balance_bheight)
+definition empty :: "('a\<Colon>linorder, 'b) rbt" where
+  "empty = RBT RBT_Impl.Empty"
 
-lemma ins_tree_greater[simp]: "(v \<guillemotleft>| ins f k x t) = (v \<guillemotleft>| t \<and> k > v)"
-  by (induct f k x t rule: ins.induct) auto
-lemma ins_tree_less[simp]: "(ins f k x t |\<guillemotleft> v) = (t |\<guillemotleft> v \<and> k < v)"
-  by (induct f k x t rule: ins.induct) auto
-lemma ins_sorted[simp]: "sorted t \<Longrightarrow> sorted (ins f k x t)"
-  by (induct f k x t rule: ins.induct) (auto simp: balance_sorted)
-
-lemma keys_ins: "set (keys (ins f k v t)) = { k } \<union> set (keys t)"
-  by (induct f k v t rule: ins.induct) auto
-
-lemma lookup_ins: 
-  fixes k :: "'a::linorder"
-  assumes "sorted t"
-  shows "lookup (ins f k v t) x = ((lookup t)(k |-> case lookup t k of None \<Rightarrow> v 
-                                                       | Some w \<Rightarrow> f k w v)) x"
-using assms by (induct f k v t rule: ins.induct) auto
-
-definition
-  insert_with_key :: "('a\<Colon>linorder \<Rightarrow> 'b \<Rightarrow> 'b \<Rightarrow> 'b) \<Rightarrow> 'a \<Rightarrow> 'b \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt"
-where
-  "insert_with_key f k v t = paint B (ins f k v t)"
-
-lemma insertwk_sorted: "sorted t \<Longrightarrow> sorted (insert_with_key f k x t)"
-  by (auto simp: insert_with_key_def)
-
-theorem insertwk_is_rbt: 
-  assumes inv: "is_rbt t" 
-  shows "is_rbt (insert_with_key f k x t)"
-using assms
-unfolding insert_with_key_def is_rbt_def
-by (auto simp: ins_inv1_inv2)
-
-lemma lookup_insertwk: 
-  assumes "sorted t"
-  shows "lookup (insert_with_key f k v t) x = ((lookup t)(k |-> case lookup t k of None \<Rightarrow> v 
-                                                       | Some w \<Rightarrow> f k w v)) x"
-unfolding insert_with_key_def using assms
-by (simp add:lookup_ins)
-
-definition
-  insertw_def: "insert_with f = insert_with_key (\<lambda>_. f)"
-
-lemma insertw_sorted: "sorted t \<Longrightarrow> sorted (insert_with f k v t)" by (simp add: insertwk_sorted insertw_def)
-theorem insertw_is_rbt: "is_rbt t \<Longrightarrow> is_rbt (insert_with f k v t)" by (simp add: insertwk_is_rbt insertw_def)
-
-lemma lookup_insertw:
-  assumes "is_rbt t"
-  shows "lookup (insert_with f k v t) = (lookup t)(k \<mapsto> (if k:dom (lookup t) then f (the (lookup t k)) v else v))"
-using assms
-unfolding insertw_def
-by (rule_tac ext) (cases "lookup t k", auto simp:lookup_insertwk dom_def)
+lemma impl_of_empty [code abstract]:
+  "impl_of empty = RBT_Impl.Empty"
+  by (simp add: empty_def RBT_inverse)
 
 definition insert :: "'a\<Colon>linorder \<Rightarrow> 'b \<Rightarrow> ('a, 'b) rbt \<Rightarrow> ('a, 'b) rbt" where
-  "insert = insert_with_key (\<lambda>_ _ nv. nv)"
+  "insert k v t = RBT (RBT_Impl.insert k v (impl_of t))"
+
+lemma impl_of_insert [code abstract]:
+  "impl_of (insert k v t) = RBT_Impl.insert k v (impl_of t)"
+  by (simp add: insert_def RBT_inverse)
+
+definition delete :: "'a\<Colon>linorder \<Rightarrow> ('a, 'b) rbt \<Rightarrow> ('a, 'b) rbt" where
+  "delete k t = RBT (RBT_Impl.delete k (impl_of t))"
 
-lemma insert_sorted: "sorted t \<Longrightarrow> sorted (insert k v t)" by (simp add: insertwk_sorted insert_def)
-theorem insert_is_rbt [simp]: "is_rbt t \<Longrightarrow> is_rbt (insert k v t)" by (simp add: insertwk_is_rbt insert_def)
+lemma impl_of_delete [code abstract]:
+  "impl_of (delete k t) = RBT_Impl.delete k (impl_of t)"
+  by (simp add: delete_def RBT_inverse)
+
+definition entries :: "('a\<Colon>linorder, 'b) rbt \<Rightarrow> ('a \<times> 'b) list" where
+  [code]: "entries t = RBT_Impl.entries (impl_of t)"
+
+definition keys :: "('a\<Colon>linorder, 'b) rbt \<Rightarrow> 'a list" where
+  [code]: "keys t = RBT_Impl.keys (impl_of t)"
+
+definition bulkload :: "('a\<Colon>linorder \<times> 'b) list \<Rightarrow> ('a, 'b) rbt" where
+  "bulkload xs = RBT (RBT_Impl.bulkload xs)"
 
-lemma lookup_insert: 
-  assumes "is_rbt t"
-  shows "lookup (insert k v t) = (lookup t)(k\<mapsto>v)"
-unfolding insert_def
-using assms
-by (rule_tac ext) (simp add: lookup_insertwk split:option.split)
+lemma impl_of_bulkload [code abstract]:
+  "impl_of (bulkload xs) = RBT_Impl.bulkload xs"
+  by (simp add: bulkload_def RBT_inverse)
+
+definition map_entry :: "'a \<Rightarrow> ('b \<Rightarrow> 'b) \<Rightarrow> ('a\<Colon>linorder, 'b) rbt \<Rightarrow> ('a, 'b) rbt" where
+  "map_entry k f t = RBT (RBT_Impl.map_entry k f (impl_of t))"
+
+lemma impl_of_map_entry [code abstract]:
+  "impl_of (map_entry k f t) = RBT_Impl.map_entry k f (impl_of t)"
+  by (simp add: map_entry_def RBT_inverse)
+
+definition map :: "('a \<Rightarrow> 'b \<Rightarrow> 'b) \<Rightarrow> ('a\<Colon>linorder, 'b) rbt \<Rightarrow> ('a, 'b) rbt" where
+  "map f t = RBT (RBT_Impl.map f (impl_of t))"
+
+lemma impl_of_map [code abstract]:
+  "impl_of (map f t) = RBT_Impl.map f (impl_of t)"
+  by (simp add: map_def RBT_inverse)
+
+definition fold :: "('a \<Rightarrow> 'b \<Rightarrow> 'c \<Rightarrow> 'c) \<Rightarrow> ('a\<Colon>linorder, 'b) rbt \<Rightarrow> 'c \<Rightarrow> 'c" where
+  [code]: "fold f t = RBT_Impl.fold f (impl_of t)"
+
+
+subsection {* Derived operations *}
+
+definition is_empty :: "('a\<Colon>linorder, 'b) rbt \<Rightarrow> bool" where
+  [code]: "is_empty t = (case impl_of t of RBT_Impl.Empty \<Rightarrow> True | _ \<Rightarrow> False)"
 
 
-subsection {* Deletion *}
-
-lemma bheight_paintR'[simp]: "color_of t = B \<Longrightarrow> bheight (paint R t) = bheight t - 1"
-by (cases t rule: rbt_cases) auto
-
-fun
-  balance_left :: "('a,'b) rbt \<Rightarrow> 'a \<Rightarrow> 'b \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt"
-where
-  "balance_left (Branch R a k x b) s y c = Branch R (Branch B a k x b) s y c" |
-  "balance_left bl k x (Branch B a s y b) = balance bl k x (Branch R a s y b)" |
-  "balance_left bl k x (Branch R (Branch B a s y b) t z c) = Branch R (Branch B bl k x a) s y (balance b t z (paint R c))" |
-  "balance_left t k x s = Empty"
+subsection {* Abstract lookup properties *}
 
-lemma balance_left_inv2_with_inv1:
-  assumes "inv2 lt" "inv2 rt" "bheight lt + 1 = bheight rt" "inv1 rt"
-  shows "bheight (balance_left lt k v rt) = bheight lt + 1"
-  and   "inv2 (balance_left lt k v rt)"
-using assms 
-by (induct lt k v rt rule: balance_left.induct) (auto simp: balance_inv2 balance_bheight)
-
-lemma balance_left_inv2_app: 
-  assumes "inv2 lt" "inv2 rt" "bheight lt + 1 = bheight rt" "color_of rt = B"
-  shows "inv2 (balance_left lt k v rt)" 
-        "bheight (balance_left lt k v rt) = bheight rt"
-using assms 
-by (induct lt k v rt rule: balance_left.induct) (auto simp add: balance_inv2 balance_bheight)+ 
-
-lemma balance_left_inv1: "\<lbrakk>inv1l a; inv1 b; color_of b = B\<rbrakk> \<Longrightarrow> inv1 (balance_left a k x b)"
-  by (induct a k x b rule: balance_left.induct) (simp add: balance_inv1)+
-
-lemma balance_left_inv1l: "\<lbrakk> inv1l lt; inv1 rt \<rbrakk> \<Longrightarrow> inv1l (balance_left lt k x rt)"
-by (induct lt k x rt rule: balance_left.induct) (auto simp: balance_inv1)
+lemma lookup_RBT:
+  "is_rbt t \<Longrightarrow> lookup (RBT t) = RBT_Impl.lookup t"
+  by (simp add: lookup_def RBT_inverse)
 
-lemma balance_left_sorted: "\<lbrakk> sorted l; sorted r; tree_less k l; tree_greater k r \<rbrakk> \<Longrightarrow> sorted (balance_left l k v r)"
-apply (induct l k v r rule: balance_left.induct)
-apply (auto simp: balance_sorted)
-apply (unfold tree_greater_prop tree_less_prop)
-by force+
-
-lemma balance_left_tree_greater: 
-  fixes k :: "'a::order"
-  assumes "k \<guillemotleft>| a" "k \<guillemotleft>| b" "k < x" 
-  shows "k \<guillemotleft>| balance_left a x t b"
-using assms 
-by (induct a x t b rule: balance_left.induct) auto
-
-lemma balance_left_tree_less: 
-  fixes k :: "'a::order"
-  assumes "a |\<guillemotleft> k" "b |\<guillemotleft> k" "x < k" 
-  shows "balance_left a x t b |\<guillemotleft> k"
-using assms
-by (induct a x t b rule: balance_left.induct) auto
+lemma lookup_impl_of:
+  "RBT_Impl.lookup (impl_of t) = lookup t"
+  by (simp add: lookup_def)
 
-lemma balance_left_in_tree: 
-  assumes "inv1l l" "inv1 r" "bheight l + 1 = bheight r"
-  shows "entry_in_tree k v (balance_left l a b r) = (entry_in_tree k v l \<or> k = a \<and> v = b \<or> entry_in_tree k v r)"
-using assms 
-by (induct l k v r rule: balance_left.induct) (auto simp: balance_in_tree)
-
-fun
-  balance_right :: "('a,'b) rbt \<Rightarrow> 'a \<Rightarrow> 'b \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt"
-where
-  "balance_right a k x (Branch R b s y c) = Branch R a k x (Branch B b s y c)" |
-  "balance_right (Branch B a k x b) s y bl = balance (Branch R a k x b) s y bl" |
-  "balance_right (Branch R a k x (Branch B b s y c)) t z bl = Branch R (balance (paint R a) k x b) s y (Branch B c t z bl)" |
-  "balance_right t k x s = Empty"
-
-lemma balance_right_inv2_with_inv1:
-  assumes "inv2 lt" "inv2 rt" "bheight lt = bheight rt + 1" "inv1 lt"
-  shows "inv2 (balance_right lt k v rt) \<and> bheight (balance_right lt k v rt) = bheight lt"
-using assms
-by (induct lt k v rt rule: balance_right.induct) (auto simp: balance_inv2 balance_bheight)
+lemma entries_impl_of:
+  "RBT_Impl.entries (impl_of t) = entries t"
+  by (simp add: entries_def)
 
-lemma balance_right_inv1: "\<lbrakk>inv1 a; inv1l b; color_of a = B\<rbrakk> \<Longrightarrow> inv1 (balance_right a k x b)"
-by (induct a k x b rule: balance_right.induct) (simp add: balance_inv1)+
-
-lemma balance_right_inv1l: "\<lbrakk> inv1 lt; inv1l rt \<rbrakk> \<Longrightarrow>inv1l (balance_right lt k x rt)"
-by (induct lt k x rt rule: balance_right.induct) (auto simp: balance_inv1)
-
-lemma balance_right_sorted: "\<lbrakk> sorted l; sorted r; tree_less k l; tree_greater k r \<rbrakk> \<Longrightarrow> sorted (balance_right l k v r)"
-apply (induct l k v r rule: balance_right.induct)
-apply (auto simp:balance_sorted)
-apply (unfold tree_less_prop tree_greater_prop)
-by force+
+lemma keys_impl_of:
+  "RBT_Impl.keys (impl_of t) = keys t"
+  by (simp add: keys_def)
 
-lemma balance_right_tree_greater: 
-  fixes k :: "'a::order"
-  assumes "k \<guillemotleft>| a" "k \<guillemotleft>| b" "k < x" 
-  shows "k \<guillemotleft>| balance_right a x t b"
-using assms by (induct a x t b rule: balance_right.induct) auto
-
-lemma balance_right_tree_less: 
-  fixes k :: "'a::order"
-  assumes "a |\<guillemotleft> k" "b |\<guillemotleft> k" "x < k" 
-  shows "balance_right a x t b |\<guillemotleft> k"
-using assms by (induct a x t b rule: balance_right.induct) auto
-
-lemma balance_right_in_tree:
-  assumes "inv1 l" "inv1l r" "bheight l = bheight r + 1" "inv2 l" "inv2 r"
-  shows "entry_in_tree x y (balance_right l k v r) = (entry_in_tree x y l \<or> x = k \<and> y = v \<or> entry_in_tree x y r)"
-using assms by (induct l k v r rule: balance_right.induct) (auto simp: balance_in_tree)
+lemma lookup_empty [simp]:
+  "lookup empty = Map.empty"
+  by (simp add: empty_def lookup_RBT expand_fun_eq)
 
-fun
-  combine :: "('a,'b) rbt \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt"
-where
-  "combine Empty x = x" 
-| "combine x Empty = x" 
-| "combine (Branch R a k x b) (Branch R c s y d) = (case (combine b c) of
-                                      Branch R b2 t z c2 \<Rightarrow> (Branch R (Branch R a k x b2) t z (Branch R c2 s y d)) |
-                                      bc \<Rightarrow> Branch R a k x (Branch R bc s y d))" 
-| "combine (Branch B a k x b) (Branch B c s y d) = (case (combine b c) of
-                                      Branch R b2 t z c2 \<Rightarrow> Branch R (Branch B a k x b2) t z (Branch B c2 s y d) |
-                                      bc \<Rightarrow> balance_left a k x (Branch B bc s y d))" 
-| "combine a (Branch R b k x c) = Branch R (combine a b) k x c" 
-| "combine (Branch R a k x b) c = Branch R a k x (combine b c)" 
-
-lemma combine_inv2:
-  assumes "inv2 lt" "inv2 rt" "bheight lt = bheight rt"
-  shows "bheight (combine lt rt) = bheight lt" "inv2 (combine lt rt)"
-using assms 
-by (induct lt rt rule: combine.induct) 
-   (auto simp: balance_left_inv2_app split: rbt.splits color.splits)
+lemma lookup_insert [simp]:
+  "lookup (insert k v t) = (lookup t)(k \<mapsto> v)"
+  by (simp add: insert_def lookup_RBT lookup_insert lookup_impl_of)
 
-lemma combine_inv1: 
-  assumes "inv1 lt" "inv1 rt"
-  shows "color_of lt = B \<Longrightarrow> color_of rt = B \<Longrightarrow> inv1 (combine lt rt)"
-         "inv1l (combine lt rt)"
-using assms 
-by (induct lt rt rule: combine.induct)
-   (auto simp: balance_left_inv1 split: rbt.splits color.splits)
+lemma lookup_delete [simp]:
+  "lookup (delete k t) = (lookup t)(k := None)"
+  by (simp add: delete_def lookup_RBT RBT_Impl.lookup_delete lookup_impl_of restrict_complement_singleton_eq)
 
-lemma combine_tree_greater[simp]: 
-  fixes k :: "'a::linorder"
-  assumes "k \<guillemotleft>| l" "k \<guillemotleft>| r" 
-  shows "k \<guillemotleft>| combine l r"
-using assms 
-by (induct l r rule: combine.induct)
-   (auto simp: balance_left_tree_greater split:rbt.splits color.splits)
-
-lemma combine_tree_less[simp]: 
-  fixes k :: "'a::linorder"
-  assumes "l |\<guillemotleft> k" "r |\<guillemotleft> k" 
-  shows "combine l r |\<guillemotleft> k"
-using assms 
-by (induct l r rule: combine.induct)
-   (auto simp: balance_left_tree_less split:rbt.splits color.splits)
+lemma map_of_entries [simp]:
+  "map_of (entries t) = lookup t"
+  by (simp add: entries_def map_of_entries lookup_impl_of)
 
-lemma combine_sorted: 
-  fixes k :: "'a::linorder"
-  assumes "sorted l" "sorted r" "l |\<guillemotleft> k" "k \<guillemotleft>| r"
-  shows "sorted (combine l r)"
-using assms proof (induct l r rule: combine.induct)
-  case (3 a x v b c y w d)
-  hence ineqs: "a |\<guillemotleft> x" "x \<guillemotleft>| b" "b |\<guillemotleft> k" "k \<guillemotleft>| c" "c |\<guillemotleft> y" "y \<guillemotleft>| d"
-    by auto
-  with 3
-  show ?case
-    by (cases "combine b c" rule: rbt_cases)
-      (auto, (metis combine_tree_greater combine_tree_less ineqs ineqs tree_less_simps(2) tree_greater_simps(2) tree_greater_trans tree_less_trans)+)
-next
-  case (4 a x v b c y w d)
-  hence "x < k \<and> tree_greater k c" by simp
-  hence "tree_greater x c" by (blast dest: tree_greater_trans)
-  with 4 have 2: "tree_greater x (combine b c)" by (simp add: combine_tree_greater)
-  from 4 have "k < y \<and> tree_less k b" by simp
-  hence "tree_less y b" by (blast dest: tree_less_trans)
-  with 4 have 3: "tree_less y (combine b c)" by (simp add: combine_tree_less)
-  show ?case
-  proof (cases "combine b c" rule: rbt_cases)
-    case Empty
-    from 4 have "x < y \<and> tree_greater y d" by auto
-    hence "tree_greater x d" by (blast dest: tree_greater_trans)
-    with 4 Empty have "sorted a" and "sorted (Branch B Empty y w d)" and "tree_less x a" and "tree_greater x (Branch B Empty y w d)" by auto
-    with Empty show ?thesis by (simp add: balance_left_sorted)
-  next
-    case (Red lta va ka rta)
-    with 2 4 have "x < va \<and> tree_less x a" by simp
-    hence 5: "tree_less va a" by (blast dest: tree_less_trans)
-    from Red 3 4 have "va < y \<and> tree_greater y d" by simp
-    hence "tree_greater va d" by (blast dest: tree_greater_trans)
-    with Red 2 3 4 5 show ?thesis by simp
-  next
-    case (Black lta va ka rta)
-    from 4 have "x < y \<and> tree_greater y d" by auto
-    hence "tree_greater x d" by (blast dest: tree_greater_trans)
-    with Black 2 3 4 have "sorted a" and "sorted (Branch B (combine b c) y w d)" and "tree_less x a" and "tree_greater x (Branch B (combine b c) y w d)" by auto
-    with Black show ?thesis by (simp add: balance_left_sorted)
-  qed
-next
-  case (5 va vb vd vc b x w c)
-  hence "k < x \<and> tree_less k (Branch B va vb vd vc)" by simp
-  hence "tree_less x (Branch B va vb vd vc)" by (blast dest: tree_less_trans)
-  with 5 show ?case by (simp add: combine_tree_less)
-next
-  case (6 a x v b va vb vd vc)
-  hence "x < k \<and> tree_greater k (Branch B va vb vd vc)" by simp
-  hence "tree_greater x (Branch B va vb vd vc)" by (blast dest: tree_greater_trans)
-  with 6 show ?case by (simp add: combine_tree_greater)
-qed simp+
+lemma entries_lookup:
+  "entries t1 = entries t2 \<longleftrightarrow> lookup t1 = lookup t2"
+  by (simp add: entries_def lookup_def entries_lookup)
+
+lemma lookup_bulkload [simp]:
+  "lookup (bulkload xs) = map_of xs"
+  by (simp add: bulkload_def lookup_RBT RBT_Impl.lookup_bulkload)
 
-lemma combine_in_tree: 
-  assumes "inv2 l" "inv2 r" "bheight l = bheight r" "inv1 l" "inv1 r"
-  shows "entry_in_tree k v (combine l r) = (entry_in_tree k v l \<or> entry_in_tree k v r)"
-using assms 
-proof (induct l r rule: combine.induct)
-  case (4 _ _ _ b c)
-  hence a: "bheight (combine b c) = bheight b" by (simp add: combine_inv2)
-  from 4 have b: "inv1l (combine b c)" by (simp add: combine_inv1)
+lemma lookup_map_entry [simp]:
+  "lookup (map_entry k f t) = (lookup t)(k := Option.map f (lookup t k))"
+  by (simp add: map_entry_def lookup_RBT lookup_map_entry lookup_impl_of)
 
-  show ?case
-  proof (cases "combine b c" rule: rbt_cases)
-    case Empty
-    with 4 a show ?thesis by (auto simp: balance_left_in_tree)
-  next
-    case (Red lta ka va rta)
-    with 4 show ?thesis by auto
-  next
-    case (Black lta ka va rta)
-    with a b 4  show ?thesis by (auto simp: balance_left_in_tree)
-  qed 
-qed (auto split: rbt.splits color.splits)
+lemma lookup_map [simp]:
+  "lookup (map f t) k = Option.map (f k) (lookup t k)"
+  by (simp add: map_def lookup_RBT lookup_map lookup_impl_of)
 
-fun
-  del_from_left :: "('a\<Colon>linorder) \<Rightarrow> ('a,'b) rbt \<Rightarrow> 'a \<Rightarrow> 'b \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt" and
-  del_from_right :: "('a\<Colon>linorder) \<Rightarrow> ('a,'b) rbt \<Rightarrow> 'a \<Rightarrow> 'b \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt" and
-  del :: "('a\<Colon>linorder) \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt"
-where
-  "del x Empty = Empty" |
-  "del x (Branch c a y s b) = (if x < y then del_from_left x a y s b else (if x > y then del_from_right x a y s b else combine a b))" |
-  "del_from_left x (Branch B lt z v rt) y s b = balance_left (del x (Branch B lt z v rt)) y s b" |
-  "del_from_left x a y s b = Branch R (del x a) y s b" |
-  "del_from_right x a y s (Branch B lt z v rt) = balance_right a y s (del x (Branch B lt z v rt))" | 
-  "del_from_right x a y s b = Branch R a y s (del x b)"
+lemma fold_fold:
+  "fold f t = (\<lambda>s. foldl (\<lambda>s (k, v). f k v s) s (entries t))"
+  by (simp add: fold_def expand_fun_eq RBT_Impl.fold_def entries_impl_of)
 
-lemma 
-  assumes "inv2 lt" "inv1 lt"
-  shows
-  "\<lbrakk>inv2 rt; bheight lt = bheight rt; inv1 rt\<rbrakk> \<Longrightarrow>
-  inv2 (del_from_left x lt k v rt) \<and> bheight (del_from_left x lt k v rt) = bheight lt \<and> (color_of lt = B \<and> color_of rt = B \<and> inv1 (del_from_left x lt k v rt) \<or> (color_of lt \<noteq> B \<or> color_of rt \<noteq> B) \<and> inv1l (del_from_left x lt k v rt))"
-  and "\<lbrakk>inv2 rt; bheight lt = bheight rt; inv1 rt\<rbrakk> \<Longrightarrow>
-  inv2 (del_from_right x lt k v rt) \<and> bheight (del_from_right x lt k v rt) = bheight lt \<and> (color_of lt = B \<and> color_of rt = B \<and> inv1 (del_from_right x lt k v rt) \<or> (color_of lt \<noteq> B \<or> color_of rt \<noteq> B) \<and> inv1l (del_from_right x lt k v rt))"
-  and del_inv1_inv2: "inv2 (del x lt) \<and> (color_of lt = R \<and> bheight (del x lt) = bheight lt \<and> inv1 (del x lt) 
-  \<or> color_of lt = B \<and> bheight (del x lt) = bheight lt - 1 \<and> inv1l (del x lt))"
-using assms
-proof (induct x lt k v rt and x lt k v rt and x lt rule: del_from_left_del_from_right_del.induct)
-case (2 y c _ y')
-  have "y = y' \<or> y < y' \<or> y > y'" by auto
-  thus ?case proof (elim disjE)
-    assume "y = y'"
-    with 2 show ?thesis by (cases c) (simp add: combine_inv2 combine_inv1)+
-  next
-    assume "y < y'"
-    with 2 show ?thesis by (cases c) auto
-  next
-    assume "y' < y"
-    with 2 show ?thesis by (cases c) auto
-  qed
-next
-  case (3 y lt z v rta y' ss bb) 
-  thus ?case by (cases "color_of (Branch B lt z v rta) = B \<and> color_of bb = B") (simp add: balance_left_inv2_with_inv1 balance_left_inv1 balance_left_inv1l)+
-next
-  case (5 y a y' ss lt z v rta)
-  thus ?case by (cases "color_of a = B \<and> color_of (Branch B lt z v rta) = B") (simp add: balance_right_inv2_with_inv1 balance_right_inv1 balance_right_inv1l)+
-next
-  case ("6_1" y a y' ss) thus ?case by (cases "color_of a = B \<and> color_of Empty = B") simp+
-qed auto
+lemma is_empty_empty [simp]:
+  "is_empty t \<longleftrightarrow> t = empty"
+  by (simp add: rbt_eq is_empty_def impl_of_empty split: rbt.split)
 
-lemma 
-  del_from_left_tree_less: "\<lbrakk>tree_less v lt; tree_less v rt; k < v\<rbrakk> \<Longrightarrow> tree_less v (del_from_left x lt k y rt)"
-  and del_from_right_tree_less: "\<lbrakk>tree_less v lt; tree_less v rt; k < v\<rbrakk> \<Longrightarrow> tree_less v (del_from_right x lt k y rt)"
-  and del_tree_less: "tree_less v lt \<Longrightarrow> tree_less v (del x lt)"
-by (induct x lt k y rt and x lt k y rt and x lt rule: del_from_left_del_from_right_del.induct) 
-   (auto simp: balance_left_tree_less balance_right_tree_less)
-
-lemma del_from_left_tree_greater: "\<lbrakk>tree_greater v lt; tree_greater v rt; k > v\<rbrakk> \<Longrightarrow> tree_greater v (del_from_left x lt k y rt)"
-  and del_from_right_tree_greater: "\<lbrakk>tree_greater v lt; tree_greater v rt; k > v\<rbrakk> \<Longrightarrow> tree_greater v (del_from_right x lt k y rt)"
-  and del_tree_greater: "tree_greater v lt \<Longrightarrow> tree_greater v (del x lt)"
-by (induct x lt k y rt and x lt k y rt and x lt rule: del_from_left_del_from_right_del.induct)
-   (auto simp: balance_left_tree_greater balance_right_tree_greater)
-
-lemma "\<lbrakk>sorted lt; sorted rt; tree_less k lt; tree_greater k rt\<rbrakk> \<Longrightarrow> sorted (del_from_left x lt k y rt)"
-  and "\<lbrakk>sorted lt; sorted rt; tree_less k lt; tree_greater k rt\<rbrakk> \<Longrightarrow> sorted (del_from_right x lt k y rt)"
-  and del_sorted: "sorted lt \<Longrightarrow> sorted (del x lt)"
-proof (induct x lt k y rt and x lt k y rt and x lt rule: del_from_left_del_from_right_del.induct)
-  case (3 x lta zz v rta yy ss bb)
-  from 3 have "tree_less yy (Branch B lta zz v rta)" by simp
-  hence "tree_less yy (del x (Branch B lta zz v rta))" by (rule del_tree_less)
-  with 3 show ?case by (simp add: balance_left_sorted)
-next
-  case ("4_2" x vaa vbb vdd vc yy ss bb)
-  hence "tree_less yy (Branch R vaa vbb vdd vc)" by simp
-  hence "tree_less yy (del x (Branch R vaa vbb vdd vc))" by (rule del_tree_less)
-  with "4_2" show ?case by simp
-next
-  case (5 x aa yy ss lta zz v rta) 
-  hence "tree_greater yy (Branch B lta zz v rta)" by simp
-  hence "tree_greater yy (del x (Branch B lta zz v rta))" by (rule del_tree_greater)
-  with 5 show ?case by (simp add: balance_right_sorted)
-next
-  case ("6_2" x aa yy ss vaa vbb vdd vc)
-  hence "tree_greater yy (Branch R vaa vbb vdd vc)" by simp
-  hence "tree_greater yy (del x (Branch R vaa vbb vdd vc))" by (rule del_tree_greater)
-  with "6_2" show ?case by simp
-qed (auto simp: combine_sorted)
+lemma RBT_lookup_empty [simp]: (*FIXME*)
+  "RBT_Impl.lookup t = Map.empty \<longleftrightarrow> t = RBT_Impl.Empty"
+  by (cases t) (auto simp add: expand_fun_eq)
 
-lemma "\<lbrakk>sorted lt; sorted rt; tree_less kt lt; tree_greater kt rt; inv1 lt; inv1 rt; inv2 lt; inv2 rt; bheight lt = bheight rt; x < kt\<rbrakk> \<Longrightarrow> entry_in_tree k v (del_from_left x lt kt y rt) = (False \<or> (x \<noteq> k \<and> entry_in_tree k v (Branch c lt kt y rt)))"
-  and "\<lbrakk>sorted lt; sorted rt; tree_less kt lt; tree_greater kt rt; inv1 lt; inv1 rt; inv2 lt; inv2 rt; bheight lt = bheight rt; x > kt\<rbrakk> \<Longrightarrow> entry_in_tree k v (del_from_right x lt kt y rt) = (False \<or> (x \<noteq> k \<and> entry_in_tree k v (Branch c lt kt y rt)))"
-  and del_in_tree: "\<lbrakk>sorted t; inv1 t; inv2 t\<rbrakk> \<Longrightarrow> entry_in_tree k v (del x t) = (False \<or> (x \<noteq> k \<and> entry_in_tree k v t))"
-proof (induct x lt kt y rt and x lt kt y rt and x t rule: del_from_left_del_from_right_del.induct)
-  case (2 xx c aa yy ss bb)
-  have "xx = yy \<or> xx < yy \<or> xx > yy" by auto
-  from this 2 show ?case proof (elim disjE)
-    assume "xx = yy"
-    with 2 show ?thesis proof (cases "xx = k")
-      case True
-      from 2 `xx = yy` `xx = k` have "sorted (Branch c aa yy ss bb) \<and> k = yy" by simp
-      hence "\<not> entry_in_tree k v aa" "\<not> entry_in_tree k v bb" by (auto simp: tree_less_nit tree_greater_prop)
-      with `xx = yy` 2 `xx = k` show ?thesis by (simp add: combine_in_tree)
-    qed (simp add: combine_in_tree)
-  qed simp+
-next    
-  case (3 xx lta zz vv rta yy ss bb)
-  def mt[simp]: mt == "Branch B lta zz vv rta"
-  from 3 have "inv2 mt \<and> inv1 mt" by simp
-  hence "inv2 (del xx mt) \<and> (color_of mt = R \<and> bheight (del xx mt) = bheight mt \<and> inv1 (del xx mt) \<or> color_of mt = B \<and> bheight (del xx mt) = bheight mt - 1 \<and> inv1l (del xx mt))" by (blast dest: del_inv1_inv2)
-  with 3 have 4: "entry_in_tree k v (del_from_left xx mt yy ss bb) = (False \<or> xx \<noteq> k \<and> entry_in_tree k v mt \<or> (k = yy \<and> v = ss) \<or> entry_in_tree k v bb)" by (simp add: balance_left_in_tree)
-  thus ?case proof (cases "xx = k")
-    case True
-    from 3 True have "tree_greater yy bb \<and> yy > k" by simp
-    hence "tree_greater k bb" by (blast dest: tree_greater_trans)
-    with 3 4 True show ?thesis by (auto simp: tree_greater_nit)
-  qed auto
-next
-  case ("4_1" xx yy ss bb)
-  show ?case proof (cases "xx = k")
-    case True
-    with "4_1" have "tree_greater yy bb \<and> k < yy" by simp
-    hence "tree_greater k bb" by (blast dest: tree_greater_trans)
-    with "4_1" `xx = k` 
-   have "entry_in_tree k v (Branch R Empty yy ss bb) = entry_in_tree k v Empty" by (auto simp: tree_greater_nit)
-    thus ?thesis by auto
-  qed simp+
-next
-  case ("4_2" xx vaa vbb vdd vc yy ss bb)
-  thus ?case proof (cases "xx = k")
-    case True
-    with "4_2" have "k < yy \<and> tree_greater yy bb" by simp
-    hence "tree_greater k bb" by (blast dest: tree_greater_trans)
-    with True "4_2" show ?thesis by (auto simp: tree_greater_nit)
-  qed auto
-next
-  case (5 xx aa yy ss lta zz vv rta)
-  def mt[simp]: mt == "Branch B lta zz vv rta"
-  from 5 have "inv2 mt \<and> inv1 mt" by simp
-  hence "inv2 (del xx mt) \<and> (color_of mt = R \<and> bheight (del xx mt) = bheight mt \<and> inv1 (del xx mt) \<or> color_of mt = B \<and> bheight (del xx mt) = bheight mt - 1 \<and> inv1l (del xx mt))" by (blast dest: del_inv1_inv2)
-  with 5 have 3: "entry_in_tree k v (del_from_right xx aa yy ss mt) = (entry_in_tree k v aa \<or> (k = yy \<and> v = ss) \<or> False \<or> xx \<noteq> k \<and> entry_in_tree k v mt)" by (simp add: balance_right_in_tree)
-  thus ?case proof (cases "xx = k")
-    case True
-    from 5 True have "tree_less yy aa \<and> yy < k" by simp
-    hence "tree_less k aa" by (blast dest: tree_less_trans)
-    with 3 5 True show ?thesis by (auto simp: tree_less_nit)
-  qed auto
-next
-  case ("6_1" xx aa yy ss)
-  show ?case proof (cases "xx = k")
-    case True
-    with "6_1" have "tree_less yy aa \<and> k > yy" by simp
-    hence "tree_less k aa" by (blast dest: tree_less_trans)
-    with "6_1" `xx = k` show ?thesis by (auto simp: tree_less_nit)
-  qed simp
-next
-  case ("6_2" xx aa yy ss vaa vbb vdd vc)
-  thus ?case proof (cases "xx = k")
-    case True
-    with "6_2" have "k > yy \<and> tree_less yy aa" by simp
-    hence "tree_less k aa" by (blast dest: tree_less_trans)
-    with True "6_2" show ?thesis by (auto simp: tree_less_nit)
-  qed auto
-qed simp
+lemma lookup_empty_empty [simp]:
+  "lookup t = Map.empty \<longleftrightarrow> t = empty"
+  by (cases t) (simp add: empty_def lookup_def RBT_inject RBT_inverse)
+
+lemma sorted_keys [iff]:
+  "sorted (keys t)"
+  by (simp add: keys_def RBT_Impl.keys_def sorted_entries)
+
+lemma distinct_keys [iff]:
+  "distinct (keys t)"
+  by (simp add: keys_def RBT_Impl.keys_def distinct_entries)
 
 
-definition delete where
-  delete_def: "delete k t = paint B (del k t)"
+subsection {* Implementation of mappings *}
 
-theorem delete_is_rbt [simp]: assumes "is_rbt t" shows "is_rbt (delete k t)"
-proof -
-  from assms have "inv2 t" and "inv1 t" unfolding is_rbt_def by auto 
-  hence "inv2 (del k t) \<and> (color_of t = R \<and> bheight (del k t) = bheight t \<and> inv1 (del k t) \<or> color_of t = B \<and> bheight (del k t) = bheight t - 1 \<and> inv1l (del k t))" by (rule del_inv1_inv2)
-  hence "inv2 (del k t) \<and> inv1l (del k t)" by (cases "color_of t") auto
-  with assms show ?thesis
-    unfolding is_rbt_def delete_def
-    by (auto intro: paint_sorted del_sorted)
-qed
-
-lemma delete_in_tree: 
-  assumes "is_rbt t" 
-  shows "entry_in_tree k v (delete x t) = (x \<noteq> k \<and> entry_in_tree k v t)"
-  using assms unfolding is_rbt_def delete_def
-  by (auto simp: del_in_tree)
+definition Mapping :: "('a\<Colon>linorder, 'b) rbt \<Rightarrow> ('a, 'b) mapping" where
+  "Mapping t = Mapping.Mapping (lookup t)"
 
-lemma lookup_delete:
-  assumes is_rbt: "is_rbt t"
-  shows "lookup (delete k t) = (lookup t)|`(-{k})"
-proof
-  fix x
-  show "lookup (delete k t) x = (lookup t |` (-{k})) x" 
-  proof (cases "x = k")
-    assume "x = k" 
-    with is_rbt show ?thesis
-      by (cases "lookup (delete k t) k") (auto simp: lookup_in_tree delete_in_tree)
-  next
-    assume "x \<noteq> k"
-    thus ?thesis
-      by auto (metis is_rbt delete_is_rbt delete_in_tree is_rbt_sorted lookup_from_in_tree)
-  qed
-qed
+code_datatype Mapping
 
-
-subsection {* Union *}
+lemma lookup_Mapping [simp, code]:
+  "Mapping.lookup (Mapping t) = lookup t"
+  by (simp add: Mapping_def)
 
-primrec
-  union_with_key :: "('a\<Colon>linorder \<Rightarrow> 'b \<Rightarrow> 'b \<Rightarrow> 'b) \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt"
-where
-  "union_with_key f t Empty = t"
-| "union_with_key f t (Branch c lt k v rt) = union_with_key f (union_with_key f (insert_with_key f k v t) lt) rt"
-
-lemma unionwk_sorted: "sorted lt \<Longrightarrow> sorted (union_with_key f lt rt)" 
-  by (induct rt arbitrary: lt) (auto simp: insertwk_sorted)
-theorem unionwk_is_rbt[simp]: "is_rbt lt \<Longrightarrow> is_rbt (union_with_key f lt rt)" 
-  by (induct rt arbitrary: lt) (simp add: insertwk_is_rbt)+
+lemma empty_Mapping [code]:
+  "Mapping.empty = Mapping empty"
+  by (rule mapping_eqI) simp
 
-definition
-  union_with where
-  "union_with f = union_with_key (\<lambda>_. f)"
-
-theorem unionw_is_rbt: "is_rbt lt \<Longrightarrow> is_rbt (union_with f lt rt)" unfolding union_with_def by simp
-
-definition union where
-  "union = union_with_key (%_ _ rv. rv)"
-
-theorem union_is_rbt: "is_rbt lt \<Longrightarrow> is_rbt (union lt rt)" unfolding union_def by simp
+lemma is_empty_Mapping [code]:
+  "Mapping.is_empty (Mapping t) \<longleftrightarrow> is_empty t"
+  by (simp add: rbt_eq Mapping.is_empty_empty Mapping_def)
 
-lemma union_Branch[simp]:
-  "union t (Branch c lt k v rt) = union (union (insert k v t) lt) rt"
-  unfolding union_def insert_def
-  by simp
+lemma insert_Mapping [code]:
+  "Mapping.update k v (Mapping t) = Mapping (insert k v t)"
+  by (rule mapping_eqI) simp
 
-lemma lookup_union:
-  assumes "is_rbt s" "sorted t"
-  shows "lookup (union s t) = lookup s ++ lookup t"
-using assms
-proof (induct t arbitrary: s)
-  case Empty thus ?case by (auto simp: union_def)
-next
-  case (Branch c l k v r s)
-  then have "sorted r" "sorted l" "l |\<guillemotleft> k" "k \<guillemotleft>| r" by auto
-
-  have meq: "lookup s(k \<mapsto> v) ++ lookup l ++ lookup r =
-    lookup s ++
-    (\<lambda>a. if a < k then lookup l a
-    else if k < a then lookup r a else Some v)" (is "?m1 = ?m2")
-  proof (rule ext)
-    fix a
+lemma delete_Mapping [code]:
+  "Mapping.delete k (Mapping xs) = Mapping (delete k xs)"
+  by (rule mapping_eqI) simp
 
-   have "k < a \<or> k = a \<or> k > a" by auto
-    thus "?m1 a = ?m2 a"
-    proof (elim disjE)
-      assume "k < a"
-      with `l |\<guillemotleft> k` have "l |\<guillemotleft> a" by (rule tree_less_trans)
-      with `k < a` show ?thesis
-        by (auto simp: map_add_def split: option.splits)
-    next
-      assume "k = a"
-      with `l |\<guillemotleft> k` `k \<guillemotleft>| r` 
-      show ?thesis by (auto simp: map_add_def)
-    next
-      assume "a < k"
-      from this `k \<guillemotleft>| r` have "a \<guillemotleft>| r" by (rule tree_greater_trans)
-      with `a < k` show ?thesis
-        by (auto simp: map_add_def split: option.splits)
-    qed
-  qed
+lemma keys_Mapping [code]:
+  "Mapping.keys (Mapping t) = set (keys t)"
+  by (simp add: keys_def Mapping_def Mapping.keys_def lookup_def lookup_keys)
 
-  from Branch have is_rbt: "is_rbt (RBT.union (RBT.insert k v s) l)"
-    by (auto intro: union_is_rbt insert_is_rbt)
-  with Branch have IHs:
-    "lookup (union (union (insert k v s) l) r) = lookup (union (insert k v s) l) ++ lookup r"
-    "lookup (union (insert k v s) l) = lookup (insert k v s) ++ lookup l"
-    by auto
-  
-  with meq show ?case
-    by (auto simp: lookup_insert[OF Branch(3)])
+lemma ordered_keys_Mapping [code]:
+  "Mapping.ordered_keys (Mapping t) = keys t"
+  by (rule sorted_distinct_set_unique) (simp_all add: ordered_keys_def keys_Mapping)
 
-qed
-
-
-subsection {* Modifying existing entries *}
-
-primrec
-  map_entry :: "'a\<Colon>linorder \<Rightarrow> ('b \<Rightarrow> 'b) \<Rightarrow> ('a, 'b) rbt \<Rightarrow> ('a, 'b) rbt"
-where
-  "map_entry k f Empty = Empty"
-| "map_entry k f (Branch c lt x v rt) =
-    (if k < x then Branch c (map_entry k f lt) x v rt
-    else if k > x then (Branch c lt x v (map_entry k f rt))
-    else Branch c lt x (f v) rt)"
+lemma Mapping_size_card_keys: (*FIXME*)
+  "Mapping.size m = card (Mapping.keys m)"
+  by (simp add: Mapping.size_def Mapping.keys_def)
 
-lemma map_entry_color_of: "color_of (map_entry k f t) = color_of t" by (induct t) simp+
-lemma map_entry_inv1: "inv1 (map_entry k f t) = inv1 t" by (induct t) (simp add: map_entry_color_of)+
-lemma map_entry_inv2: "inv2 (map_entry k f t) = inv2 t" "bheight (map_entry k f t) = bheight t" by (induct t) simp+
-lemma map_entry_tree_greater: "tree_greater a (map_entry k f t) = tree_greater a t" by (induct t) simp+
-lemma map_entry_tree_less: "tree_less a (map_entry k f t) = tree_less a t" by (induct t) simp+
-lemma map_entry_sorted: "sorted (map_entry k f t) = sorted t"
-  by (induct t) (simp_all add: map_entry_tree_less map_entry_tree_greater)
-
-theorem map_entry_is_rbt [simp]: "is_rbt (map_entry k f t) = is_rbt t" 
-unfolding is_rbt_def by (simp add: map_entry_inv2 map_entry_color_of map_entry_sorted map_entry_inv1 )
-
-theorem lookup_map_entry:
-  "lookup (map_entry k f t) = (lookup t)(k := Option.map f (lookup t k))"
-  by (induct t) (auto split: option.splits simp add: expand_fun_eq)
-
+lemma size_Mapping [code]:
+  "Mapping.size (Mapping t) = length (keys t)"
+  by (simp add: Mapping_size_card_keys keys_Mapping distinct_card)
 
-subsection {* Mapping all entries *}
-
-primrec
-  map :: "('a \<Rightarrow> 'b \<Rightarrow> 'c) \<Rightarrow> ('a, 'b) rbt \<Rightarrow> ('a, 'c) rbt"
-where
-  "map f Empty = Empty"
-| "map f (Branch c lt k v rt) = Branch c (map f lt) k (f k v) (map f rt)"
+lemma tabulate_Mapping [code]:
+  "Mapping.tabulate ks f = Mapping (bulkload (List.map (\<lambda>k. (k, f k)) ks))"
+  by (rule mapping_eqI) (simp add: map_of_map_restrict)
 
-lemma map_entries [simp]: "entries (map f t) = List.map (\<lambda>(k, v). (k, f k v)) (entries t)"
-  by (induct t) auto
-lemma map_keys [simp]: "keys (map f t) = keys t" by (simp add: keys_def split_def)
-lemma map_tree_greater: "tree_greater k (map f t) = tree_greater k t" by (induct t) simp+
-lemma map_tree_less: "tree_less k (map f t) = tree_less k t" by (induct t) simp+
-lemma map_sorted: "sorted (map f t) = sorted t"  by (induct t) (simp add: map_tree_less map_tree_greater)+
-lemma map_color_of: "color_of (map f t) = color_of t" by (induct t) simp+
-lemma map_inv1: "inv1 (map f t) = inv1 t" by (induct t) (simp add: map_color_of)+
-lemma map_inv2: "inv2 (map f t) = inv2 t" "bheight (map f t) = bheight t" by (induct t) simp+
-theorem map_is_rbt [simp]: "is_rbt (map f t) = is_rbt t" 
-unfolding is_rbt_def by (simp add: map_inv1 map_inv2 map_sorted map_color_of)
-
-theorem lookup_map: "lookup (map f t) x = Option.map (f x) (lookup t x)"
-  by (induct t) auto
-
-
-subsection {* Folding over entries *}
-
-definition fold :: "('a \<Rightarrow> 'b \<Rightarrow> 'c \<Rightarrow> 'c) \<Rightarrow> ('a, 'b) rbt \<Rightarrow> 'c \<Rightarrow> 'c" where
-  "fold f t s = foldl (\<lambda>s (k, v). f k v s) s (entries t)"
+lemma bulkload_Mapping [code]:
+  "Mapping.bulkload vs = Mapping (bulkload (List.map (\<lambda>n. (n, vs ! n)) [0..<length vs]))"
+  by (rule mapping_eqI) (simp add: map_of_map_restrict expand_fun_eq)
 
-lemma fold_simps [simp, code]:
-  "fold f Empty = id"
-  "fold f (Branch c lt k v rt) = fold f rt \<circ> f k v \<circ> fold f lt"
-  by (simp_all add: fold_def expand_fun_eq)
-
-
-subsection {* Bulkloading a tree *}
-
-definition bulkload :: "('a \<times> 'b) list \<Rightarrow> ('a\<Colon>linorder, 'b) rbt" where
-  "bulkload xs = foldr (\<lambda>(k, v). RBT.insert k v) xs RBT.Empty"
-
-lemma bulkload_is_rbt [simp, intro]:
-  "is_rbt (bulkload xs)"
-  unfolding bulkload_def by (induct xs) auto
+lemma [code, code del]: "HOL.eq (x :: (_, _) mapping) y \<longleftrightarrow> x = y" by (fact eq_equals) (*FIXME*)
 
-lemma lookup_bulkload:
-  "RBT.lookup (bulkload xs) = map_of xs"
-proof -
-  obtain ys where "ys = rev xs" by simp
-  have "\<And>t. is_rbt t \<Longrightarrow>
-    RBT.lookup (foldl (\<lambda>t (k, v). RBT.insert k v t) t ys) = RBT.lookup t ++ map_of (rev ys)"
-      by (induct ys) (simp_all add: bulkload_def split_def RBT.lookup_insert)
-  from this Empty_is_rbt have
-    "RBT.lookup (foldl (\<lambda>t (k, v). RBT.insert k v t) RBT.Empty (rev xs)) = RBT.lookup RBT.Empty ++ map_of xs"
-     by (simp add: `ys = rev xs`)
-  then show ?thesis by (simp add: bulkload_def foldl_foldr lookup_Empty split_def)
-qed
+lemma eq_Mapping [code]:
+  "HOL.eq (Mapping t1) (Mapping t2) \<longleftrightarrow> entries t1 = entries t2"
+  by (simp add: eq Mapping_def entries_lookup)
 
-hide (open) const Empty insert delete entries bulkload lookup map_entry map fold union sorted
+hide_const (open) impl_of lookup empty insert delete
+  entries keys bulkload map_entry map fold
 (*>*)
 
 text {* 
-  This theory defines purely functional red-black trees which can be
-  used as an efficient representation of finite maps.
+  This theory defines abstract red-black trees as an efficient
+  representation of finite maps, backed by the implementation
+  in @{theory RBT_Impl}.
 *}
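+
+text {*
+  As a rough, purely illustrative sketch (not part of the formal
+  development): the operations of this theory are meant to be used
+  like an ordinary functional map, e.g. the expression
+  @{text "lookup (insert (2::nat) ''b'' (insert 1 ''a'' empty)) 2"}
+  evaluates to @{text "Some ''b''"}, as can be derived from the
+  lemmas collected at the end of this theory.
+*}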
 
-
 subsection {* Data type and invariant *}
 
 text {*
-  The type @{typ "('k, 'v) rbt"} denotes red-black trees with keys of
-  type @{typ "'k"} and values of type @{typ "'v"}. To function
-  properly, the key type musorted belong to the @{text "linorder"} class.
+  The type @{typ "('k, 'v) RBT_Impl.rbt"} denotes red-black trees with
+  keys of type @{typ "'k"} and values of type @{typ "'v"}. To function
+  properly, the key type must belong to the @{text "linorder"}
+  class.
 
   A value @{term t} of this type is a valid red-black tree if it
-  satisfies the invariant @{text "is_rbt t"}.
-  This theory provides lemmas to prove that the invariant is
-  satisfied throughout the computation.
+  satisfies the invariant @{text "is_rbt t"}.  The abstract type @{typ
+  "('k, 'v) rbt"} always obeys this invariant, and for this reason you
+  should only use this type in your applications.  Going back to @{typ
+  "('k, 'v) RBT_Impl.rbt"} is only necessary in proofs where properties
+  of the operations that are not yet available as lemmas have to be
+  established.
 
   The interpretation function @{const "RBT.lookup"} returns the partial
   map represented by a red-black tree:
@@ -1106,15 +259,12 @@
   $O(\log n)$.  
 *}
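+
+text {*
+  An informal illustration of the invariant (this remark is not a
+  formal claim of the theory): at the level of @{text RBT_Impl}, a raw
+  term such as @{text "Branch R (Branch R Empty k v Empty) k' v' Empty"}
+  violates @{text inv1}, because a red node has a red child.  The
+  operations of this theory preserve @{text is_rbt}, so such trees
+  never occur as representations of abstract values.
+*}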
 
-
 subsection {* Operations *}
 
-print_antiquotations
-
 text {*
   Currently, the following operations are supported:
 
-  @{term_type [display] "RBT.Empty"}
+  @{term_type [display] "RBT.empty"}
   Returns the empty tree. $O(1)$
 
   @{term_type [display] "RBT.insert"}
@@ -1137,9 +287,6 @@
 
   @{term_type [display] "RBT.fold"}
   Folds over all entries in a tree. $O(n)$
-
-  @{term_type [display] "RBT.union"}
-  Forms the union of two trees, preferring entries from the first one.
 *}
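+
+text {*
+  A hypothetical usage sketch, for orientation only: a tree can be set
+  up from an association list via @{text "bulkload [(1, a), (2, b)]"},
+  and all values of a tree with numeric values could be summed up by
+  @{text "fold (\<lambda>_ v s. v + s) t 0"}.  Neither expression is
+  used in this theory.
+*}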
 
 
@@ -1173,8 +320,8 @@
 
 text {*
   \noindent
-  \underline{@{text "lookup_Empty"}}
-  @{thm [display] lookup_Empty}
+  \underline{@{text "lookup_empty"}}
+  @{thm [display] lookup_empty}
   \vspace{1ex}
 
   \noindent
@@ -1196,11 +343,6 @@
   \underline{@{text "lookup_map"}}
   @{thm [display] lookup_map}
   \vspace{1ex}
-
-  \noindent
-  \underline{@{text "lookup_union"}}
-  @{thm [display] lookup_union}
-  \vspace{1ex}
 *}
 
 end
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/src/HOL/Library/RBT_Impl.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -0,0 +1,1084 @@
+(*  Title:      RBT_Impl.thy
+    Author:     Markus Reiter, TU Muenchen
+    Author:     Alexander Krauss, TU Muenchen
+*)
+
+header {* Implementation of Red-Black Trees *}
+
+theory RBT_Impl
+imports Main
+begin
+
+text {*
+  For applications, you should use theory @{text RBT}, which defines
+  an abstract type of red-black trees obeying the invariant.
+*}
+
+subsection {* Datatype of RB trees *}
+
+datatype color = R | B
+datatype ('a, 'b) rbt = Empty | Branch color "('a, 'b) rbt" 'a 'b "('a, 'b) rbt"
+
+lemma rbt_cases:
+  obtains (Empty) "t = Empty" 
+  | (Red) l k v r where "t = Branch R l k v r" 
+  | (Black) l k v r where "t = Branch B l k v r"
+proof (cases t)
+  case Empty with that show thesis by blast
+next
+  case (Branch c) with that show thesis by (cases c) blast+
+qed
+
+subsection {* Tree properties *}
+
+subsubsection {* Content of a tree *}
+
+primrec entries :: "('a, 'b) rbt \<Rightarrow> ('a \<times> 'b) list"
+where 
+  "entries Empty = []"
+| "entries (Branch _ l k v r) = entries l @ (k,v) # entries r"
+
+abbreviation (input) entry_in_tree :: "'a \<Rightarrow> 'b \<Rightarrow> ('a, 'b) rbt \<Rightarrow> bool"
+where
+  "entry_in_tree k v t \<equiv> (k, v) \<in> set (entries t)"
+
+definition keys :: "('a, 'b) rbt \<Rightarrow> 'a list" where
+  "keys t = map fst (entries t)"
+
+lemma keys_simps [simp, code]:
+  "keys Empty = []"
+  "keys (Branch c l k v r) = keys l @ k # keys r"
+  by (simp_all add: keys_def)
+
+lemma entry_in_tree_keys:
+  assumes "(k, v) \<in> set (entries t)"
+  shows "k \<in> set (keys t)"
+proof -
+  from assms have "fst (k, v) \<in> fst ` set (entries t)" by (rule imageI)
+  then show ?thesis by (simp add: keys_def)
+qed
+
+lemma keys_entries:
+  "k \<in> set (keys t) \<longleftrightarrow> (\<exists>v. (k, v) \<in> set (entries t))"
+  by (auto intro: entry_in_tree_keys) (auto simp add: keys_def)
+
+
+subsubsection {* Search tree properties *}
+
+definition tree_less :: "'a\<Colon>order \<Rightarrow> ('a, 'b) rbt \<Rightarrow> bool"
+where
+  tree_less_prop: "tree_less k t \<longleftrightarrow> (\<forall>x\<in>set (keys t). x < k)"
+
+abbreviation tree_less_symbol (infix "|\<guillemotleft>" 50)
+where "t |\<guillemotleft> x \<equiv> tree_less x t"
+
+definition tree_greater :: "'a\<Colon>order \<Rightarrow> ('a, 'b) rbt \<Rightarrow> bool" (infix "\<guillemotleft>|" 50) 
+where
+  tree_greater_prop: "tree_greater k t = (\<forall>x\<in>set (keys t). k < x)"
+
+lemma tree_less_simps [simp]:
+  "tree_less k Empty = True"
+  "tree_less k (Branch c lt kt v rt) \<longleftrightarrow> kt < k \<and> tree_less k lt \<and> tree_less k rt"
+  by (auto simp add: tree_less_prop)
+
+lemma tree_greater_simps [simp]:
+  "tree_greater k Empty = True"
+  "tree_greater k (Branch c lt kt v rt) \<longleftrightarrow> k < kt \<and> tree_greater k lt \<and> tree_greater k rt"
+  by (auto simp add: tree_greater_prop)
+
+lemmas tree_ord_props = tree_less_prop tree_greater_prop
+
+lemmas tree_greater_nit = tree_greater_prop entry_in_tree_keys
+lemmas tree_less_nit = tree_less_prop entry_in_tree_keys
+
+lemma tree_less_eq_trans: "l |\<guillemotleft> u \<Longrightarrow> u \<le> v \<Longrightarrow> l |\<guillemotleft> v"
+  and tree_less_trans: "t |\<guillemotleft> x \<Longrightarrow> x < y \<Longrightarrow> t |\<guillemotleft> y"
+  and tree_greater_eq_trans: "u \<le> v \<Longrightarrow> v \<guillemotleft>| r \<Longrightarrow> u \<guillemotleft>| r"
+  and tree_greater_trans: "x < y \<Longrightarrow> y \<guillemotleft>| t \<Longrightarrow> x \<guillemotleft>| t"
+  by (auto simp: tree_ord_props)
+
+primrec sorted :: "('a::linorder, 'b) rbt \<Rightarrow> bool"
+where
+  "sorted Empty = True"
+| "sorted (Branch c l k v r) = (l |\<guillemotleft> k \<and> k \<guillemotleft>| r \<and> sorted l \<and> sorted r)"
+
+lemma sorted_entries:
+  "sorted t \<Longrightarrow> List.sorted (List.map fst (entries t))"
+by (induct t) 
+  (force simp: sorted_append sorted_Cons tree_ord_props 
+      dest!: entry_in_tree_keys)+
+
+lemma distinct_entries:
+  "sorted t \<Longrightarrow> distinct (List.map fst (entries t))"
+by (induct t) 
+  (force simp: sorted_append sorted_Cons tree_ord_props 
+      dest!: entry_in_tree_keys)+
+
+
+subsubsection {* Tree lookup *}
+
+primrec lookup :: "('a\<Colon>linorder, 'b) rbt \<Rightarrow> 'a \<rightharpoonup> 'b"
+where
+  "lookup Empty k = None"
+| "lookup (Branch _ l x y r) k = (if k < x then lookup l k else if x < k then lookup r k else Some y)"
+
+lemma lookup_keys: "sorted t \<Longrightarrow> dom (lookup t) = set (keys t)"
+  by (induct t) (auto simp: dom_def tree_greater_prop tree_less_prop)
+
+lemma dom_lookup_Branch: 
+  "sorted (Branch c t1 k v t2) \<Longrightarrow> 
+    dom (lookup (Branch c t1 k v t2)) 
+    = Set.insert k (dom (lookup t1) \<union> dom (lookup t2))"
+proof -
+  assume "sorted (Branch c t1 k v t2)"
+  moreover from this have "sorted t1" "sorted t2" by simp_all
+  ultimately show ?thesis by (simp add: lookup_keys)
+qed
+
+lemma finite_dom_lookup [simp, intro!]: "finite (dom (lookup t))"
+proof (induct t)
+  case Empty then show ?case by simp
+next
+  case (Branch color t1 a b t2)
+  let ?A = "Set.insert a (dom (lookup t1) \<union> dom (lookup t2))"
+  have "dom (lookup (Branch color t1 a b t2)) \<subseteq> ?A" by (auto split: split_if_asm)
+  moreover from Branch have "finite (insert a (dom (lookup t1) \<union> dom (lookup t2)))" by simp
+  ultimately show ?case by (rule finite_subset)
+qed 
+
+lemma lookup_tree_less[simp]: "t |\<guillemotleft> k \<Longrightarrow> lookup t k = None" 
+by (induct t) auto
+
+lemma lookup_tree_greater[simp]: "k \<guillemotleft>| t \<Longrightarrow> lookup t k = None"
+by (induct t) auto
+
+lemma lookup_Empty: "lookup Empty = empty"
+by (rule ext) simp
+
+lemma map_of_entries:
+  "sorted t \<Longrightarrow> map_of (entries t) = lookup t"
+proof (induct t)
+  case Empty thus ?case by (simp add: lookup_Empty)
+next
+  case (Branch c t1 k v t2)
+  have "lookup (Branch c t1 k v t2) = lookup t2 ++ [k\<mapsto>v] ++ lookup t1"
+  proof (rule ext)
+    fix x
+    from Branch have SORTED: "sorted (Branch c t1 k v t2)" by simp
+    let ?thesis = "lookup (Branch c t1 k v t2) x = (lookup t2 ++ [k \<mapsto> v] ++ lookup t1) x"
+
+    have DOM_T1: "!!k'. k'\<in>dom (lookup t1) \<Longrightarrow> k>k'"
+    proof -
+      fix k'
+      from SORTED have "t1 |\<guillemotleft> k" by simp
+      with tree_less_prop have "\<forall>k'\<in>set (keys t1). k>k'" by auto
+      moreover assume "k'\<in>dom (lookup t1)"
+      ultimately show "k>k'" using lookup_keys SORTED by auto
+    qed
+    
+    have DOM_T2: "!!k'. k'\<in>dom (lookup t2) \<Longrightarrow> k<k'"
+    proof -
+      fix k'
+      from SORTED have "k \<guillemotleft>| t2" by simp
+      with tree_greater_prop have "\<forall>k'\<in>set (keys t2). k<k'" by auto
+      moreover assume "k'\<in>dom (lookup t2)"
+      ultimately show "k<k'" using lookup_keys SORTED by auto
+    qed
+    
+    {
+      assume C: "x<k"
+      hence "lookup (Branch c t1 k v t2) x = lookup t1 x" by simp
+      moreover from C have "x\<notin>dom [k\<mapsto>v]" by simp
+      moreover have "x\<notin>dom (lookup t2)" proof
+        assume "x\<in>dom (lookup t2)"
+        with DOM_T2 have "k<x" by blast
+        with C show False by simp
+      qed
+      ultimately have ?thesis by (simp add: map_add_upd_left map_add_dom_app_simps)
+    } moreover {
+      assume [simp]: "x=k"
+      hence "lookup (Branch c t1 k v t2) x = [k \<mapsto> v] x" by simp
+      moreover have "x\<notin>dom (lookup t1)" proof
+        assume "x\<in>dom (lookup t1)"
+        with DOM_T1 have "k>x" by blast
+        thus False by simp
+      qed
+      ultimately have ?thesis by (simp add: map_add_upd_left map_add_dom_app_simps)
+    } moreover {
+      assume C: "x>k"
+      hence "lookup (Branch c t1 k v t2) x = lookup t2 x" by (simp add: less_not_sym[of k x])
+      moreover from C have "x\<notin>dom [k\<mapsto>v]" by simp
+      moreover have "x\<notin>dom (lookup t1)" proof
+        assume "x\<in>dom (lookup t1)"
+        with DOM_T1 have "k>x" by simp
+        with C show False by simp
+      qed
+      ultimately have ?thesis by (simp add: map_add_upd_left map_add_dom_app_simps)
+    } ultimately show ?thesis using less_linear by blast
+  qed
+  also from Branch have "lookup t2 ++ [k \<mapsto> v] ++ lookup t1 = map_of (entries (Branch c t1 k v t2))" by simp
+  finally show ?case by simp
+qed
+
+lemma lookup_in_tree: "sorted t \<Longrightarrow> lookup t k = Some v \<longleftrightarrow> (k, v) \<in> set (entries t)"
+  by (simp add: map_of_entries [symmetric] distinct_entries)
+
+lemma set_entries_inject:
+  assumes sorted: "sorted t1" "sorted t2" 
+  shows "set (entries t1) = set (entries t2) \<longleftrightarrow> entries t1 = entries t2"
+proof -
+  from sorted have "distinct (map fst (entries t1))"
+    "distinct (map fst (entries t2))"
+    by (auto intro: distinct_entries)
+  with sorted show ?thesis
+    by (auto intro: map_sorted_distinct_set_unique sorted_entries simp add: distinct_map)
+qed
+
+lemma entries_eqI:
+  assumes sorted: "sorted t1" "sorted t2" 
+  assumes lookup: "lookup t1 = lookup t2"
+  shows "entries t1 = entries t2"
+proof -
+  from sorted lookup have "map_of (entries t1) = map_of (entries t2)"
+    by (simp add: map_of_entries)
+  with sorted have "set (entries t1) = set (entries t2)"
+    by (simp add: map_of_inject_set distinct_entries)
+  with sorted show ?thesis by (simp add: set_entries_inject)
+qed
+
+lemma entries_lookup:
+  assumes "sorted t1" "sorted t2" 
+  shows "entries t1 = entries t2 \<longleftrightarrow> lookup t1 = lookup t2"
+  using assms by (auto intro: entries_eqI simp add: map_of_entries [symmetric])
+
+lemma lookup_from_in_tree: 
+  assumes "sorted t1" "sorted t2" 
+  and "\<And>v. (k\<Colon>'a\<Colon>linorder, v) \<in> set (entries t1) \<longleftrightarrow> (k, v) \<in> set (entries t2)" 
+  shows "lookup t1 k = lookup t2 k"
+proof -
+  from assms have "k \<in> dom (lookup t1) \<longleftrightarrow> k \<in> dom (lookup t2)"
+    by (simp add: keys_entries lookup_keys)
+  with assms show ?thesis by (auto simp add: lookup_in_tree [symmetric])
+qed
+
+
+subsubsection {* Red-black properties *}
+
+primrec color_of :: "('a, 'b) rbt \<Rightarrow> color"
+where
+  "color_of Empty = B"
+| "color_of (Branch c _ _ _ _) = c"
+
+primrec bheight :: "('a,'b) rbt \<Rightarrow> nat"
+where
+  "bheight Empty = 0"
+| "bheight (Branch c lt k v rt) = (if c = B then Suc (bheight lt) else bheight lt)"
+
+primrec inv1 :: "('a, 'b) rbt \<Rightarrow> bool"
+where
+  "inv1 Empty = True"
+| "inv1 (Branch c lt k v rt) \<longleftrightarrow> inv1 lt \<and> inv1 rt \<and> (c = B \<or> color_of lt = B \<and> color_of rt = B)"
+
+primrec inv1l :: "('a, 'b) rbt \<Rightarrow> bool" -- {* Weaker version *}
+where
+  "inv1l Empty = True"
+| "inv1l (Branch c l k v r) = (inv1 l \<and> inv1 r)"
+lemma [simp]: "inv1 t \<Longrightarrow> inv1l t" by (cases t) simp+
+
+primrec inv2 :: "('a, 'b) rbt \<Rightarrow> bool"
+where
+  "inv2 Empty = True"
+| "inv2 (Branch c lt k v rt) = (inv2 lt \<and> inv2 rt \<and> bheight lt = bheight rt)"
+
+definition is_rbt :: "('a\<Colon>linorder, 'b) rbt \<Rightarrow> bool" where
+  "is_rbt t \<longleftrightarrow> inv1 t \<and> inv2 t \<and> color_of t = B \<and> sorted t"
+
+lemma is_rbt_sorted [simp]:
+  "is_rbt t \<Longrightarrow> sorted t" by (simp add: is_rbt_def)
+
+theorem Empty_is_rbt [simp]:
+  "is_rbt Empty" by (simp add: is_rbt_def)
+
+
+subsection {* Insertion *}
+
+fun (* slow, due to massive case splitting *)
+  balance :: "('a,'b) rbt \<Rightarrow> 'a \<Rightarrow> 'b \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt"
+where
+  "balance (Branch R a w x b) s t (Branch R c y z d) = Branch R (Branch B a w x b) s t (Branch B c y z d)" |
+  "balance (Branch R (Branch R a w x b) s t c) y z d = Branch R (Branch B a w x b) s t (Branch B c y z d)" |
+  "balance (Branch R a w x (Branch R b s t c)) y z d = Branch R (Branch B a w x b) s t (Branch B c y z d)" |
+  "balance a w x (Branch R b s t (Branch R c y z d)) = Branch R (Branch B a w x b) s t (Branch B c y z d)" |
+  "balance a w x (Branch R (Branch R b s t c) y z d) = Branch R (Branch B a w x b) s t (Branch B c y z d)" |
+  "balance a s t b = Branch B a s t b"
+
+lemma balance_inv1: "\<lbrakk>inv1l l; inv1l r\<rbrakk> \<Longrightarrow> inv1 (balance l k v r)" 
+  by (induct l k v r rule: balance.induct) auto
+
+lemma balance_bheight: "bheight l = bheight r \<Longrightarrow> bheight (balance l k v r) = Suc (bheight l)"
+  by (induct l k v r rule: balance.induct) auto
+
+lemma balance_inv2: 
+  assumes "inv2 l" "inv2 r" "bheight l = bheight r"
+  shows "inv2 (balance l k v r)"
+  using assms
+  by (induct l k v r rule: balance.induct) auto
+
+lemma balance_tree_greater[simp]: "(v \<guillemotleft>| balance a k x b) = (v \<guillemotleft>| a \<and> v \<guillemotleft>| b \<and> v < k)" 
+  by (induct a k x b rule: balance.induct) auto
+
+lemma balance_tree_less[simp]: "(balance a k x b |\<guillemotleft> v) = (a |\<guillemotleft> v \<and> b |\<guillemotleft> v \<and> k < v)"
+  by (induct a k x b rule: balance.induct) auto
+
+lemma balance_sorted: 
+  fixes k :: "'a::linorder"
+  assumes "sorted l" "sorted r" "l |\<guillemotleft> k" "k \<guillemotleft>| r"
+  shows "sorted (balance l k v r)"
+using assms proof (induct l k v r rule: balance.induct)
+  case ("2_2" a x w b y t c z s va vb vd vc)
+  hence "y < z \<and> z \<guillemotleft>| Branch B va vb vd vc" 
+    by (auto simp add: tree_ord_props)
+  hence "tree_greater y (Branch B va vb vd vc)" by (blast dest: tree_greater_trans)
+  with "2_2" show ?case by simp
+next
+  case ("3_2" va vb vd vc x w b y s c z)
+  from "3_2" have "x < y \<and> tree_less x (Branch B va vb vd vc)" 
+    by simp
+  hence "tree_less y (Branch B va vb vd vc)" by (blast dest: tree_less_trans)
+  with "3_2" show ?case by simp
+next
+  case ("3_3" x w b y s c z t va vb vd vc)
+  from "3_3" have "y < z \<and> tree_greater z (Branch B va vb vd vc)" by simp
+  hence "tree_greater y (Branch B va vb vd vc)" by (blast dest: tree_greater_trans)
+  with "3_3" show ?case by simp
+next
+  case ("3_4" vd ve vg vf x w b y s c z t va vb vii vc)
+  hence "x < y \<and> tree_less x (Branch B vd ve vg vf)" by simp
+  hence 1: "tree_less y (Branch B vd ve vg vf)" by (blast dest: tree_less_trans)
+  from "3_4" have "y < z \<and> tree_greater z (Branch B va vb vii vc)" by simp
+  hence "tree_greater y (Branch B va vb vii vc)" by (blast dest: tree_greater_trans)
+  with 1 "3_4" show ?case by simp
+next
+  case ("4_2" va vb vd vc x w b y s c z t dd)
+  hence "x < y \<and> tree_less x (Branch B va vb vd vc)" by simp
+  hence "tree_less y (Branch B va vb vd vc)" by (blast dest: tree_less_trans)
+  with "4_2" show ?case by simp
+next
+  case ("5_2" x w b y s c z t va vb vd vc)
+  hence "y < z \<and> tree_greater z (Branch B va vb vd vc)" by simp
+  hence "tree_greater y (Branch B va vb vd vc)" by (blast dest: tree_greater_trans)
+  with "5_2" show ?case by simp
+next
+  case ("5_3" va vb vd vc x w b y s c z t)
+  hence "x < y \<and> tree_less x (Branch B va vb vd vc)" by simp
+  hence "tree_less y (Branch B va vb vd vc)" by (blast dest: tree_less_trans)
+  with "5_3" show ?case by simp
+next
+  case ("5_4" va vb vg vc x w b y s c z t vd ve vii vf)
+  hence "x < y \<and> tree_less x (Branch B va vb vg vc)" by simp
+  hence 1: "tree_less y (Branch B va vb vg vc)" by (blast dest: tree_less_trans)
+  from "5_4" have "y < z \<and> tree_greater z (Branch B vd ve vii vf)" by simp
+  hence "tree_greater y (Branch B vd ve vii vf)" by (blast dest: tree_greater_trans)
+  with 1 "5_4" show ?case by simp
+qed simp+
+
+lemma entries_balance [simp]:
+  "entries (balance l k v r) = entries l @ (k, v) # entries r"
+  by (induct l k v r rule: balance.induct) auto
+
+lemma keys_balance [simp]: 
+  "keys (balance l k v r) = keys l @ k # keys r"
+  by (simp add: keys_def)
+
+lemma balance_in_tree:  
+  "entry_in_tree k x (balance l v y r) \<longleftrightarrow> entry_in_tree k x l \<or> k = v \<and> x = y \<or> entry_in_tree k x r"
+  by (auto simp add: keys_def)
+
+lemma lookup_balance[simp]: 
+fixes k :: "'a::linorder"
+assumes "sorted l" "sorted r" "l |\<guillemotleft> k" "k \<guillemotleft>| r"
+shows "lookup (balance l k v r) x = lookup (Branch B l k v r) x"
+by (rule lookup_from_in_tree) (auto simp:assms balance_in_tree balance_sorted)
+
+primrec paint :: "color \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt"
+where
+  "paint c Empty = Empty"
+| "paint c (Branch _ l k v r) = Branch c l k v r"
+
+lemma paint_inv1l[simp]: "inv1l t \<Longrightarrow> inv1l (paint c t)" by (cases t) auto
+lemma paint_inv1[simp]: "inv1l t \<Longrightarrow> inv1 (paint B t)" by (cases t) auto
+lemma paint_inv2[simp]: "inv2 t \<Longrightarrow> inv2 (paint c t)" by (cases t) auto
+lemma paint_color_of[simp]: "color_of (paint B t) = B" by (cases t) auto
+lemma paint_sorted[simp]: "sorted t \<Longrightarrow> sorted (paint c t)" by (cases t) auto
+lemma paint_in_tree[simp]: "entry_in_tree k x (paint c t) = entry_in_tree k x t" by (cases t) auto
+lemma paint_lookup[simp]: "lookup (paint c t) = lookup t" by (rule ext) (cases t, auto)
+lemma paint_tree_greater[simp]: "(v \<guillemotleft>| paint c t) = (v \<guillemotleft>| t)" by (cases t) auto
+lemma paint_tree_less[simp]: "(paint c t |\<guillemotleft> v) = (t |\<guillemotleft> v)" by (cases t) auto
+
+fun
+  ins :: "('a\<Colon>linorder \<Rightarrow> 'b \<Rightarrow> 'b \<Rightarrow> 'b) \<Rightarrow> 'a \<Rightarrow> 'b \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt"
+where
+  "ins f k v Empty = Branch R Empty k v Empty" |
+  "ins f k v (Branch B l x y r) = (if k < x then balance (ins f k v l) x y r
+                               else if k > x then balance l x y (ins f k v r)
+                               else Branch B l x (f k y v) r)" |
+  "ins f k v (Branch R l x y r) = (if k < x then Branch R (ins f k v l) x y r
+                               else if k > x then Branch R l x y (ins f k v r)
+                               else Branch R l x (f k y v) r)"
+
+lemma ins_inv1_inv2: 
+  assumes "inv1 t" "inv2 t"
+  shows "inv2 (ins f k x t)" "bheight (ins f k x t) = bheight t" 
+  "color_of t = B \<Longrightarrow> inv1 (ins f k x t)" "inv1l (ins f k x t)"
+  using assms
+  by (induct f k x t rule: ins.induct) (auto simp: balance_inv1 balance_inv2 balance_bheight)
+
+lemma ins_tree_greater[simp]: "(v \<guillemotleft>| ins f k x t) = (v \<guillemotleft>| t \<and> k > v)"
+  by (induct f k x t rule: ins.induct) auto
+lemma ins_tree_less[simp]: "(ins f k x t |\<guillemotleft> v) = (t |\<guillemotleft> v \<and> k < v)"
+  by (induct f k x t rule: ins.induct) auto
+lemma ins_sorted[simp]: "sorted t \<Longrightarrow> sorted (ins f k x t)"
+  by (induct f k x t rule: ins.induct) (auto simp: balance_sorted)
+
+lemma keys_ins: "set (keys (ins f k v t)) = { k } \<union> set (keys t)"
+  by (induct f k v t rule: ins.induct) auto
+
+lemma lookup_ins: 
+  fixes k :: "'a::linorder"
+  assumes "sorted t"
+  shows "lookup (ins f k v t) x = ((lookup t)(k |-> case lookup t k of None \<Rightarrow> v 
+                                                       | Some w \<Rightarrow> f k w v)) x"
+using assms by (induct f k v t rule: ins.induct) auto
+
+definition
+  insert_with_key :: "('a\<Colon>linorder \<Rightarrow> 'b \<Rightarrow> 'b \<Rightarrow> 'b) \<Rightarrow> 'a \<Rightarrow> 'b \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt"
+where
+  "insert_with_key f k v t = paint B (ins f k v t)"
+
+lemma insertwk_sorted: "sorted t \<Longrightarrow> sorted (insert_with_key f k x t)"
+  by (auto simp: insert_with_key_def)
+
+theorem insertwk_is_rbt: 
+  assumes inv: "is_rbt t" 
+  shows "is_rbt (insert_with_key f k x t)"
+using assms
+unfolding insert_with_key_def is_rbt_def
+by (auto simp: ins_inv1_inv2)
+
+lemma lookup_insertwk: 
+  assumes "sorted t"
+  shows "lookup (insert_with_key f k v t) x = ((lookup t)(k |-> case lookup t k of None \<Rightarrow> v 
+                                                       | Some w \<Rightarrow> f k w v)) x"
+unfolding insert_with_key_def using assms
+by (simp add:lookup_ins)
+
+definition
+  insertw_def: "insert_with f = insert_with_key (\<lambda>_. f)"
+
+lemma insertw_sorted: "sorted t \<Longrightarrow> sorted (insert_with f k v t)" by (simp add: insertwk_sorted insertw_def)
+theorem insertw_is_rbt: "is_rbt t \<Longrightarrow> is_rbt (insert_with f k v t)" by (simp add: insertwk_is_rbt insertw_def)
+
+lemma lookup_insertw:
+  assumes "is_rbt t"
+  shows "lookup (insert_with f k v t) = (lookup t)(k \<mapsto> (if k:dom (lookup t) then f (the (lookup t k)) v else v))"
+using assms
+unfolding insertw_def
+by (rule_tac ext) (cases "lookup t k", auto simp:lookup_insertwk dom_def)
+
+definition insert :: "'a\<Colon>linorder \<Rightarrow> 'b \<Rightarrow> ('a, 'b) rbt \<Rightarrow> ('a, 'b) rbt" where
+  "insert = insert_with_key (\<lambda>_ _ nv. nv)"
+
+lemma insert_sorted: "sorted t \<Longrightarrow> sorted (insert k v t)" by (simp add: insertwk_sorted insert_def)
+theorem insert_is_rbt [simp]: "is_rbt t \<Longrightarrow> is_rbt (insert k v t)" by (simp add: insertwk_is_rbt insert_def)
+
+lemma lookup_insert: 
+  assumes "is_rbt t"
+  shows "lookup (insert k v t) = (lookup t)(k\<mapsto>v)"
+unfolding insert_def
+using assms
+by (rule_tac ext) (simp add: lookup_insertwk split:option.split)
+
+
+subsection {* Deletion *}
+
+lemma bheight_paintR'[simp]: "color_of t = B \<Longrightarrow> bheight (paint R t) = bheight t - 1"
+by (cases t rule: rbt_cases) auto
+
+fun
+  balance_left :: "('a,'b) rbt \<Rightarrow> 'a \<Rightarrow> 'b \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt"
+where
+  "balance_left (Branch R a k x b) s y c = Branch R (Branch B a k x b) s y c" |
+  "balance_left bl k x (Branch B a s y b) = balance bl k x (Branch R a s y b)" |
+  "balance_left bl k x (Branch R (Branch B a s y b) t z c) = Branch R (Branch B bl k x a) s y (balance b t z (paint R c))" |
+  "balance_left t k x s = Empty"
+
+lemma balance_left_inv2_with_inv1:
+  assumes "inv2 lt" "inv2 rt" "bheight lt + 1 = bheight rt" "inv1 rt"
+  shows "bheight (balance_left lt k v rt) = bheight lt + 1"
+  and   "inv2 (balance_left lt k v rt)"
+using assms 
+by (induct lt k v rt rule: balance_left.induct) (auto simp: balance_inv2 balance_bheight)
+
+lemma balance_left_inv2_app: 
+  assumes "inv2 lt" "inv2 rt" "bheight lt + 1 = bheight rt" "color_of rt = B"
+  shows "inv2 (balance_left lt k v rt)" 
+        "bheight (balance_left lt k v rt) = bheight rt"
+using assms 
+by (induct lt k v rt rule: balance_left.induct) (auto simp add: balance_inv2 balance_bheight)+ 
+
+lemma balance_left_inv1: "\<lbrakk>inv1l a; inv1 b; color_of b = B\<rbrakk> \<Longrightarrow> inv1 (balance_left a k x b)"
+  by (induct a k x b rule: balance_left.induct) (simp add: balance_inv1)+
+
+lemma balance_left_inv1l: "\<lbrakk> inv1l lt; inv1 rt \<rbrakk> \<Longrightarrow> inv1l (balance_left lt k x rt)"
+by (induct lt k x rt rule: balance_left.induct) (auto simp: balance_inv1)
+
+lemma balance_left_sorted: "\<lbrakk> sorted l; sorted r; tree_less k l; tree_greater k r \<rbrakk> \<Longrightarrow> sorted (balance_left l k v r)"
+apply (induct l k v r rule: balance_left.induct)
+apply (auto simp: balance_sorted)
+apply (unfold tree_greater_prop tree_less_prop)
+by force+
+
+lemma balance_left_tree_greater: 
+  fixes k :: "'a::order"
+  assumes "k \<guillemotleft>| a" "k \<guillemotleft>| b" "k < x" 
+  shows "k \<guillemotleft>| balance_left a x t b"
+using assms 
+by (induct a x t b rule: balance_left.induct) auto
+
+lemma balance_left_tree_less: 
+  fixes k :: "'a::order"
+  assumes "a |\<guillemotleft> k" "b |\<guillemotleft> k" "x < k" 
+  shows "balance_left a x t b |\<guillemotleft> k"
+using assms
+by (induct a x t b rule: balance_left.induct) auto
+
+lemma balance_left_in_tree: 
+  assumes "inv1l l" "inv1 r" "bheight l + 1 = bheight r"
+  shows "entry_in_tree k v (balance_left l a b r) = (entry_in_tree k v l \<or> k = a \<and> v = b \<or> entry_in_tree k v r)"
+using assms 
+by (induct l k v r rule: balance_left.induct) (auto simp: balance_in_tree)
+
+fun
+  balance_right :: "('a,'b) rbt \<Rightarrow> 'a \<Rightarrow> 'b \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt"
+where
+  "balance_right a k x (Branch R b s y c) = Branch R a k x (Branch B b s y c)" |
+  "balance_right (Branch B a k x b) s y bl = balance (Branch R a k x b) s y bl" |
+  "balance_right (Branch R a k x (Branch B b s y c)) t z bl = Branch R (balance (paint R a) k x b) s y (Branch B c t z bl)" |
+  "balance_right t k x s = Empty"
+
+lemma balance_right_inv2_with_inv1:
+  assumes "inv2 lt" "inv2 rt" "bheight lt = bheight rt + 1" "inv1 lt"
+  shows "inv2 (balance_right lt k v rt) \<and> bheight (balance_right lt k v rt) = bheight lt"
+using assms
+by (induct lt k v rt rule: balance_right.induct) (auto simp: balance_inv2 balance_bheight)
+
+lemma balance_right_inv1: "\<lbrakk>inv1 a; inv1l b; color_of a = B\<rbrakk> \<Longrightarrow> inv1 (balance_right a k x b)"
+by (induct a k x b rule: balance_right.induct) (simp add: balance_inv1)+
+
+lemma balance_right_inv1l: "\<lbrakk> inv1 lt; inv1l rt \<rbrakk> \<Longrightarrow>inv1l (balance_right lt k x rt)"
+by (induct lt k x rt rule: balance_right.induct) (auto simp: balance_inv1)
+
+lemma balance_right_sorted: "\<lbrakk> sorted l; sorted r; tree_less k l; tree_greater k r \<rbrakk> \<Longrightarrow> sorted (balance_right l k v r)"
+apply (induct l k v r rule: balance_right.induct)
+apply (auto simp:balance_sorted)
+apply (unfold tree_less_prop tree_greater_prop)
+by force+
+
+lemma balance_right_tree_greater: 
+  fixes k :: "'a::order"
+  assumes "k \<guillemotleft>| a" "k \<guillemotleft>| b" "k < x" 
+  shows "k \<guillemotleft>| balance_right a x t b"
+using assms by (induct a x t b rule: balance_right.induct) auto
+
+lemma balance_right_tree_less: 
+  fixes k :: "'a::order"
+  assumes "a |\<guillemotleft> k" "b |\<guillemotleft> k" "x < k" 
+  shows "balance_right a x t b |\<guillemotleft> k"
+using assms by (induct a x t b rule: balance_right.induct) auto
+
+lemma balance_right_in_tree:
+  assumes "inv1 l" "inv1l r" "bheight l = bheight r + 1" "inv2 l" "inv2 r"
+  shows "entry_in_tree x y (balance_right l k v r) = (entry_in_tree x y l \<or> x = k \<and> y = v \<or> entry_in_tree x y r)"
+using assms by (induct l k v r rule: balance_right.induct) (auto simp: balance_in_tree)
+
+fun
+  combine :: "('a,'b) rbt \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt"
+where
+  "combine Empty x = x" 
+| "combine x Empty = x" 
+| "combine (Branch R a k x b) (Branch R c s y d) = (case (combine b c) of
+                                      Branch R b2 t z c2 \<Rightarrow> (Branch R (Branch R a k x b2) t z (Branch R c2 s y d)) |
+                                      bc \<Rightarrow> Branch R a k x (Branch R bc s y d))" 
+| "combine (Branch B a k x b) (Branch B c s y d) = (case (combine b c) of
+                                      Branch R b2 t z c2 \<Rightarrow> Branch R (Branch B a k x b2) t z (Branch B c2 s y d) |
+                                      bc \<Rightarrow> balance_left a k x (Branch B bc s y d))" 
+| "combine a (Branch R b k x c) = Branch R (combine a b) k x c" 
+| "combine (Branch R a k x b) c = Branch R a k x (combine b c)" 
+
+lemma combine_inv2:
+  assumes "inv2 lt" "inv2 rt" "bheight lt = bheight rt"
+  shows "bheight (combine lt rt) = bheight lt" "inv2 (combine lt rt)"
+using assms 
+by (induct lt rt rule: combine.induct) 
+   (auto simp: balance_left_inv2_app split: rbt.splits color.splits)
+
+lemma combine_inv1: 
+  assumes "inv1 lt" "inv1 rt"
+  shows "color_of lt = B \<Longrightarrow> color_of rt = B \<Longrightarrow> inv1 (combine lt rt)"
+         "inv1l (combine lt rt)"
+using assms 
+by (induct lt rt rule: combine.induct)
+   (auto simp: balance_left_inv1 split: rbt.splits color.splits)
+
+lemma combine_tree_greater[simp]: 
+  fixes k :: "'a::linorder"
+  assumes "k \<guillemotleft>| l" "k \<guillemotleft>| r" 
+  shows "k \<guillemotleft>| combine l r"
+using assms 
+by (induct l r rule: combine.induct)
+   (auto simp: balance_left_tree_greater split:rbt.splits color.splits)
+
+lemma combine_tree_less[simp]: 
+  fixes k :: "'a::linorder"
+  assumes "l |\<guillemotleft> k" "r |\<guillemotleft> k" 
+  shows "combine l r |\<guillemotleft> k"
+using assms 
+by (induct l r rule: combine.induct)
+   (auto simp: balance_left_tree_less split:rbt.splits color.splits)
+
+lemma combine_sorted: 
+  fixes k :: "'a::linorder"
+  assumes "sorted l" "sorted r" "l |\<guillemotleft> k" "k \<guillemotleft>| r"
+  shows "sorted (combine l r)"
+using assms proof (induct l r rule: combine.induct)
+  case (3 a x v b c y w d)
+  hence ineqs: "a |\<guillemotleft> x" "x \<guillemotleft>| b" "b |\<guillemotleft> k" "k \<guillemotleft>| c" "c |\<guillemotleft> y" "y \<guillemotleft>| d"
+    by auto
+  with 3
+  show ?case
+    by (cases "combine b c" rule: rbt_cases)
+      (auto, (metis combine_tree_greater combine_tree_less ineqs ineqs tree_less_simps(2) tree_greater_simps(2) tree_greater_trans tree_less_trans)+)
+next
+  case (4 a x v b c y w d)
+  hence "x < k \<and> tree_greater k c" by simp
+  hence "tree_greater x c" by (blast dest: tree_greater_trans)
+  with 4 have 2: "tree_greater x (combine b c)" by (simp add: combine_tree_greater)
+  from 4 have "k < y \<and> tree_less k b" by simp
+  hence "tree_less y b" by (blast dest: tree_less_trans)
+  with 4 have 3: "tree_less y (combine b c)" by (simp add: combine_tree_less)
+  show ?case
+  proof (cases "combine b c" rule: rbt_cases)
+    case Empty
+    from 4 have "x < y \<and> tree_greater y d" by auto
+    hence "tree_greater x d" by (blast dest: tree_greater_trans)
+    with 4 Empty have "sorted a" and "sorted (Branch B Empty y w d)" and "tree_less x a" and "tree_greater x (Branch B Empty y w d)" by auto
+    with Empty show ?thesis by (simp add: balance_left_sorted)
+  next
+    case (Red lta va ka rta)
+    with 2 4 have "x < va \<and> tree_less x a" by simp
+    hence 5: "tree_less va a" by (blast dest: tree_less_trans)
+    from Red 3 4 have "va < y \<and> tree_greater y d" by simp
+    hence "tree_greater va d" by (blast dest: tree_greater_trans)
+    with Red 2 3 4 5 show ?thesis by simp
+  next
+    case (Black lta va ka rta)
+    from 4 have "x < y \<and> tree_greater y d" by auto
+    hence "tree_greater x d" by (blast dest: tree_greater_trans)
+    with Black 2 3 4 have "sorted a" and "sorted (Branch B (combine b c) y w d)" and "tree_less x a" and "tree_greater x (Branch B (combine b c) y w d)" by auto
+    with Black show ?thesis by (simp add: balance_left_sorted)
+  qed
+next
+  case (5 va vb vd vc b x w c)
+  hence "k < x \<and> tree_less k (Branch B va vb vd vc)" by simp
+  hence "tree_less x (Branch B va vb vd vc)" by (blast dest: tree_less_trans)
+  with 5 show ?case by (simp add: combine_tree_less)
+next
+  case (6 a x v b va vb vd vc)
+  hence "x < k \<and> tree_greater k (Branch B va vb vd vc)" by simp
+  hence "tree_greater x (Branch B va vb vd vc)" by (blast dest: tree_greater_trans)
+  with 6 show ?case by (simp add: combine_tree_greater)
+qed simp+
+
+lemma combine_in_tree: 
+  assumes "inv2 l" "inv2 r" "bheight l = bheight r" "inv1 l" "inv1 r"
+  shows "entry_in_tree k v (combine l r) = (entry_in_tree k v l \<or> entry_in_tree k v r)"
+using assms 
+proof (induct l r rule: combine.induct)
+  case (4 _ _ _ b c)
+  hence a: "bheight (combine b c) = bheight b" by (simp add: combine_inv2)
+  from 4 have b: "inv1l (combine b c)" by (simp add: combine_inv1)
+
+  show ?case
+  proof (cases "combine b c" rule: rbt_cases)
+    case Empty
+    with 4 a show ?thesis by (auto simp: balance_left_in_tree)
+  next
+    case (Red lta ka va rta)
+    with 4 show ?thesis by auto
+  next
+    case (Black lta ka va rta)
+    with a b 4  show ?thesis by (auto simp: balance_left_in_tree)
+  qed 
+qed (auto split: rbt.splits color.splits)
+
+fun
+  del_from_left :: "('a\<Colon>linorder) \<Rightarrow> ('a,'b) rbt \<Rightarrow> 'a \<Rightarrow> 'b \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt" and
+  del_from_right :: "('a\<Colon>linorder) \<Rightarrow> ('a,'b) rbt \<Rightarrow> 'a \<Rightarrow> 'b \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt" and
+  del :: "('a\<Colon>linorder) \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt"
+where
+  "del x Empty = Empty" |
+  "del x (Branch c a y s b) = (if x < y then del_from_left x a y s b else (if x > y then del_from_right x a y s b else combine a b))" |
+  "del_from_left x (Branch B lt z v rt) y s b = balance_left (del x (Branch B lt z v rt)) y s b" |
+  "del_from_left x a y s b = Branch R (del x a) y s b" |
+  "del_from_right x a y s (Branch B lt z v rt) = balance_right a y s (del x (Branch B lt z v rt))" | 
+  "del_from_right x a y s b = Branch R a y s (del x b)"
+
+lemma 
+  assumes "inv2 lt" "inv1 lt"
+  shows
+  "\<lbrakk>inv2 rt; bheight lt = bheight rt; inv1 rt\<rbrakk> \<Longrightarrow>
+  inv2 (del_from_left x lt k v rt) \<and> bheight (del_from_left x lt k v rt) = bheight lt \<and> (color_of lt = B \<and> color_of rt = B \<and> inv1 (del_from_left x lt k v rt) \<or> (color_of lt \<noteq> B \<or> color_of rt \<noteq> B) \<and> inv1l (del_from_left x lt k v rt))"
+  and "\<lbrakk>inv2 rt; bheight lt = bheight rt; inv1 rt\<rbrakk> \<Longrightarrow>
+  inv2 (del_from_right x lt k v rt) \<and> bheight (del_from_right x lt k v rt) = bheight lt \<and> (color_of lt = B \<and> color_of rt = B \<and> inv1 (del_from_right x lt k v rt) \<or> (color_of lt \<noteq> B \<or> color_of rt \<noteq> B) \<and> inv1l (del_from_right x lt k v rt))"
+  and del_inv1_inv2: "inv2 (del x lt) \<and> (color_of lt = R \<and> bheight (del x lt) = bheight lt \<and> inv1 (del x lt) 
+  \<or> color_of lt = B \<and> bheight (del x lt) = bheight lt - 1 \<and> inv1l (del x lt))"
+using assms
+proof (induct x lt k v rt and x lt k v rt and x lt rule: del_from_left_del_from_right_del.induct)
+  case (2 y c _ y')
+  have "y = y' \<or> y < y' \<or> y > y'" by auto
+  thus ?case proof (elim disjE)
+    assume "y = y'"
+    with 2 show ?thesis by (cases c) (simp add: combine_inv2 combine_inv1)+
+  next
+    assume "y < y'"
+    with 2 show ?thesis by (cases c) auto
+  next
+    assume "y' < y"
+    with 2 show ?thesis by (cases c) auto
+  qed
+next
+  case (3 y lt z v rta y' ss bb) 
+  thus ?case by (cases "color_of (Branch B lt z v rta) = B \<and> color_of bb = B") (simp add: balance_left_inv2_with_inv1 balance_left_inv1 balance_left_inv1l)+
+next
+  case (5 y a y' ss lt z v rta)
+  thus ?case by (cases "color_of a = B \<and> color_of (Branch B lt z v rta) = B") (simp add: balance_right_inv2_with_inv1 balance_right_inv1 balance_right_inv1l)+
+next
+  case ("6_1" y a y' ss) thus ?case by (cases "color_of a = B \<and> color_of Empty = B") simp+
+qed auto
+
+lemma 
+  del_from_left_tree_less: "\<lbrakk>tree_less v lt; tree_less v rt; k < v\<rbrakk> \<Longrightarrow> tree_less v (del_from_left x lt k y rt)"
+  and del_from_right_tree_less: "\<lbrakk>tree_less v lt; tree_less v rt; k < v\<rbrakk> \<Longrightarrow> tree_less v (del_from_right x lt k y rt)"
+  and del_tree_less: "tree_less v lt \<Longrightarrow> tree_less v (del x lt)"
+by (induct x lt k y rt and x lt k y rt and x lt rule: del_from_left_del_from_right_del.induct) 
+   (auto simp: balance_left_tree_less balance_right_tree_less)
+
+lemma del_from_left_tree_greater: "\<lbrakk>tree_greater v lt; tree_greater v rt; k > v\<rbrakk> \<Longrightarrow> tree_greater v (del_from_left x lt k y rt)"
+  and del_from_right_tree_greater: "\<lbrakk>tree_greater v lt; tree_greater v rt; k > v\<rbrakk> \<Longrightarrow> tree_greater v (del_from_right x lt k y rt)"
+  and del_tree_greater: "tree_greater v lt \<Longrightarrow> tree_greater v (del x lt)"
+by (induct x lt k y rt and x lt k y rt and x lt rule: del_from_left_del_from_right_del.induct)
+   (auto simp: balance_left_tree_greater balance_right_tree_greater)
+
+lemma "\<lbrakk>sorted lt; sorted rt; tree_less k lt; tree_greater k rt\<rbrakk> \<Longrightarrow> sorted (del_from_left x lt k y rt)"
+  and "\<lbrakk>sorted lt; sorted rt; tree_less k lt; tree_greater k rt\<rbrakk> \<Longrightarrow> sorted (del_from_right x lt k y rt)"
+  and del_sorted: "sorted lt \<Longrightarrow> sorted (del x lt)"
+proof (induct x lt k y rt and x lt k y rt and x lt rule: del_from_left_del_from_right_del.induct)
+  case (3 x lta zz v rta yy ss bb)
+  from 3 have "tree_less yy (Branch B lta zz v rta)" by simp
+  hence "tree_less yy (del x (Branch B lta zz v rta))" by (rule del_tree_less)
+  with 3 show ?case by (simp add: balance_left_sorted)
+next
+  case ("4_2" x vaa vbb vdd vc yy ss bb)
+  hence "tree_less yy (Branch R vaa vbb vdd vc)" by simp
+  hence "tree_less yy (del x (Branch R vaa vbb vdd vc))" by (rule del_tree_less)
+  with "4_2" show ?case by simp
+next
+  case (5 x aa yy ss lta zz v rta) 
+  hence "tree_greater yy (Branch B lta zz v rta)" by simp
+  hence "tree_greater yy (del x (Branch B lta zz v rta))" by (rule del_tree_greater)
+  with 5 show ?case by (simp add: balance_right_sorted)
+next
+  case ("6_2" x aa yy ss vaa vbb vdd vc)
+  hence "tree_greater yy (Branch R vaa vbb vdd vc)" by simp
+  hence "tree_greater yy (del x (Branch R vaa vbb vdd vc))" by (rule del_tree_greater)
+  with "6_2" show ?case by simp
+qed (auto simp: combine_sorted)
+
+lemma "\<lbrakk>sorted lt; sorted rt; tree_less kt lt; tree_greater kt rt; inv1 lt; inv1 rt; inv2 lt; inv2 rt; bheight lt = bheight rt; x < kt\<rbrakk> \<Longrightarrow> entry_in_tree k v (del_from_left x lt kt y rt) = (False \<or> (x \<noteq> k \<and> entry_in_tree k v (Branch c lt kt y rt)))"
+  and "\<lbrakk>sorted lt; sorted rt; tree_less kt lt; tree_greater kt rt; inv1 lt; inv1 rt; inv2 lt; inv2 rt; bheight lt = bheight rt; x > kt\<rbrakk> \<Longrightarrow> entry_in_tree k v (del_from_right x lt kt y rt) = (False \<or> (x \<noteq> k \<and> entry_in_tree k v (Branch c lt kt y rt)))"
+  and del_in_tree: "\<lbrakk>sorted t; inv1 t; inv2 t\<rbrakk> \<Longrightarrow> entry_in_tree k v (del x t) = (False \<or> (x \<noteq> k \<and> entry_in_tree k v t))"
+proof (induct x lt kt y rt and x lt kt y rt and x t rule: del_from_left_del_from_right_del.induct)
+  case (2 xx c aa yy ss bb)
+  have "xx = yy \<or> xx < yy \<or> xx > yy" by auto
+  from this 2 show ?case proof (elim disjE)
+    assume "xx = yy"
+    with 2 show ?thesis proof (cases "xx = k")
+      case True
+      from 2 `xx = yy` `xx = k` have "sorted (Branch c aa yy ss bb) \<and> k = yy" by simp
+      hence "\<not> entry_in_tree k v aa" "\<not> entry_in_tree k v bb" by (auto simp: tree_less_nit tree_greater_prop)
+      with `xx = yy` 2 `xx = k` show ?thesis by (simp add: combine_in_tree)
+    qed (simp add: combine_in_tree)
+  qed simp+
+next    
+  case (3 xx lta zz vv rta yy ss bb)
+  def mt[simp]: mt == "Branch B lta zz vv rta"
+  from 3 have "inv2 mt \<and> inv1 mt" by simp
+  hence "inv2 (del xx mt) \<and> (color_of mt = R \<and> bheight (del xx mt) = bheight mt \<and> inv1 (del xx mt) \<or> color_of mt = B \<and> bheight (del xx mt) = bheight mt - 1 \<and> inv1l (del xx mt))" by (blast dest: del_inv1_inv2)
+  with 3 have 4: "entry_in_tree k v (del_from_left xx mt yy ss bb) = (False \<or> xx \<noteq> k \<and> entry_in_tree k v mt \<or> (k = yy \<and> v = ss) \<or> entry_in_tree k v bb)" by (simp add: balance_left_in_tree)
+  thus ?case proof (cases "xx = k")
+    case True
+    from 3 True have "tree_greater yy bb \<and> yy > k" by simp
+    hence "tree_greater k bb" by (blast dest: tree_greater_trans)
+    with 3 4 True show ?thesis by (auto simp: tree_greater_nit)
+  qed auto
+next
+  case ("4_1" xx yy ss bb)
+  show ?case proof (cases "xx = k")
+    case True
+    with "4_1" have "tree_greater yy bb \<and> k < yy" by simp
+    hence "tree_greater k bb" by (blast dest: tree_greater_trans)
+    with "4_1" `xx = k` 
+   have "entry_in_tree k v (Branch R Empty yy ss bb) = entry_in_tree k v Empty" by (auto simp: tree_greater_nit)
+    thus ?thesis by auto
+  qed simp+
+next
+  case ("4_2" xx vaa vbb vdd vc yy ss bb)
+  thus ?case proof (cases "xx = k")
+    case True
+    with "4_2" have "k < yy \<and> tree_greater yy bb" by simp
+    hence "tree_greater k bb" by (blast dest: tree_greater_trans)
+    with True "4_2" show ?thesis by (auto simp: tree_greater_nit)
+  qed auto
+next
+  case (5 xx aa yy ss lta zz vv rta)
+  def mt[simp]: mt == "Branch B lta zz vv rta"
+  from 5 have "inv2 mt \<and> inv1 mt" by simp
+  hence "inv2 (del xx mt) \<and> (color_of mt = R \<and> bheight (del xx mt) = bheight mt \<and> inv1 (del xx mt) \<or> color_of mt = B \<and> bheight (del xx mt) = bheight mt - 1 \<and> inv1l (del xx mt))" by (blast dest: del_inv1_inv2)
+  with 5 have 3: "entry_in_tree k v (del_from_right xx aa yy ss mt) = (entry_in_tree k v aa \<or> (k = yy \<and> v = ss) \<or> False \<or> xx \<noteq> k \<and> entry_in_tree k v mt)" by (simp add: balance_right_in_tree)
+  thus ?case proof (cases "xx = k")
+    case True
+    from 5 True have "tree_less yy aa \<and> yy < k" by simp
+    hence "tree_less k aa" by (blast dest: tree_less_trans)
+    with 3 5 True show ?thesis by (auto simp: tree_less_nit)
+  qed auto
+next
+  case ("6_1" xx aa yy ss)
+  show ?case proof (cases "xx = k")
+    case True
+    with "6_1" have "tree_less yy aa \<and> k > yy" by simp
+    hence "tree_less k aa" by (blast dest: tree_less_trans)
+    with "6_1" `xx = k` show ?thesis by (auto simp: tree_less_nit)
+  qed simp
+next
+  case ("6_2" xx aa yy ss vaa vbb vdd vc)
+  thus ?case proof (cases "xx = k")
+    case True
+    with "6_2" have "k > yy \<and> tree_less yy aa" by simp
+    hence "tree_less k aa" by (blast dest: tree_less_trans)
+    with True "6_2" show ?thesis by (auto simp: tree_less_nit)
+  qed auto
+qed simp
+
+
+definition delete where
+  delete_def: "delete k t = paint B (del k t)"
+
+theorem delete_is_rbt [simp]: assumes "is_rbt t" shows "is_rbt (delete k t)"
+proof -
+  from assms have "inv2 t" and "inv1 t" unfolding is_rbt_def by auto 
+  hence "inv2 (del k t) \<and> (color_of t = R \<and> bheight (del k t) = bheight t \<and> inv1 (del k t) \<or> color_of t = B \<and> bheight (del k t) = bheight t - 1 \<and> inv1l (del k t))" by (rule del_inv1_inv2)
+  hence "inv2 (del k t) \<and> inv1l (del k t)" by (cases "color_of t") auto
+  with assms show ?thesis
+    unfolding is_rbt_def delete_def
+    by (auto intro: paint_sorted del_sorted)
+qed
+
+lemma delete_in_tree: 
+  assumes "is_rbt t" 
+  shows "entry_in_tree k v (delete x t) = (x \<noteq> k \<and> entry_in_tree k v t)"
+  using assms unfolding is_rbt_def delete_def
+  by (auto simp: del_in_tree)
+
+lemma lookup_delete:
+  assumes is_rbt: "is_rbt t"
+  shows "lookup (delete k t) = (lookup t)|`(-{k})"
+proof
+  fix x
+  show "lookup (delete k t) x = (lookup t |` (-{k})) x" 
+  proof (cases "x = k")
+    assume "x = k" 
+    with is_rbt show ?thesis
+      by (cases "lookup (delete k t) k") (auto simp: lookup_in_tree delete_in_tree)
+  next
+    assume "x \<noteq> k"
+    thus ?thesis
+      by auto (metis is_rbt delete_is_rbt delete_in_tree is_rbt_sorted lookup_from_in_tree)
+  qed
+qed
+
+
+subsection {* Union *}
+
+primrec
+  union_with_key :: "('a\<Colon>linorder \<Rightarrow> 'b \<Rightarrow> 'b \<Rightarrow> 'b) \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt \<Rightarrow> ('a,'b) rbt"
+where
+  "union_with_key f t Empty = t"
+| "union_with_key f t (Branch c lt k v rt) = union_with_key f (union_with_key f (insert_with_key f k v t) lt) rt"
+
+lemma unionwk_sorted: "sorted lt \<Longrightarrow> sorted (union_with_key f lt rt)" 
+  by (induct rt arbitrary: lt) (auto simp: insertwk_sorted)
+theorem unionwk_is_rbt[simp]: "is_rbt lt \<Longrightarrow> is_rbt (union_with_key f lt rt)" 
+  by (induct rt arbitrary: lt) (simp add: insertwk_is_rbt)+
+
+definition
+  union_with where
+  "union_with f = union_with_key (\<lambda>_. f)"
+
+theorem unionw_is_rbt: "is_rbt lt \<Longrightarrow> is_rbt (union_with f lt rt)" unfolding union_with_def by simp
+
+definition union where
+  "union = union_with_key (%_ _ rv. rv)"
+
+theorem union_is_rbt: "is_rbt lt \<Longrightarrow> is_rbt (union lt rt)" unfolding union_def by simp
+
+lemma union_Branch[simp]:
+  "union t (Branch c lt k v rt) = union (union (insert k v t) lt) rt"
+  unfolding union_def insert_def
+  by simp
+
+lemma lookup_union:
+  assumes "is_rbt s" "sorted t"
+  shows "lookup (union s t) = lookup s ++ lookup t"
+using assms
+proof (induct t arbitrary: s)
+  case Empty thus ?case by (auto simp: union_def)
+next
+  case (Branch c l k v r s)
+  then have "sorted r" "sorted l" "l |\<guillemotleft> k" "k \<guillemotleft>| r" by auto
+
+  have meq: "lookup s(k \<mapsto> v) ++ lookup l ++ lookup r =
+    lookup s ++
+    (\<lambda>a. if a < k then lookup l a
+    else if k < a then lookup r a else Some v)" (is "?m1 = ?m2")
+  proof (rule ext)
+    fix a
+
+   have "k < a \<or> k = a \<or> k > a" by auto
+    thus "?m1 a = ?m2 a"
+    proof (elim disjE)
+      assume "k < a"
+      with `l |\<guillemotleft> k` have "l |\<guillemotleft> a" by (rule tree_less_trans)
+      with `k < a` show ?thesis
+        by (auto simp: map_add_def split: option.splits)
+    next
+      assume "k = a"
+      with `l |\<guillemotleft> k` `k \<guillemotleft>| r` 
+      show ?thesis by (auto simp: map_add_def)
+    next
+      assume "a < k"
+      from this `k \<guillemotleft>| r` have "a \<guillemotleft>| r" by (rule tree_greater_trans)
+      with `a < k` show ?thesis
+        by (auto simp: map_add_def split: option.splits)
+    qed
+  qed
+
+  from Branch have is_rbt: "is_rbt (RBT_Impl.union (RBT_Impl.insert k v s) l)"
+    by (auto intro: union_is_rbt insert_is_rbt)
+  with Branch have IHs:
+    "lookup (union (union (insert k v s) l) r) = lookup (union (insert k v s) l) ++ lookup r"
+    "lookup (union (insert k v s) l) = lookup (insert k v s) ++ lookup l"
+    by auto
+  
+  with meq show ?case
+    by (auto simp: lookup_insert[OF Branch(3)])
+
+qed
+
+
+subsection {* Modifying existing entries *}
+
+primrec
+  map_entry :: "'a\<Colon>linorder \<Rightarrow> ('b \<Rightarrow> 'b) \<Rightarrow> ('a, 'b) rbt \<Rightarrow> ('a, 'b) rbt"
+where
+  "map_entry k f Empty = Empty"
+| "map_entry k f (Branch c lt x v rt) =
+    (if k < x then Branch c (map_entry k f lt) x v rt
+    else if k > x then (Branch c lt x v (map_entry k f rt))
+    else Branch c lt x (f v) rt)"
+
+lemma map_entry_color_of: "color_of (map_entry k f t) = color_of t" by (induct t) simp+
+lemma map_entry_inv1: "inv1 (map_entry k f t) = inv1 t" by (induct t) (simp add: map_entry_color_of)+
+lemma map_entry_inv2: "inv2 (map_entry k f t) = inv2 t" "bheight (map_entry k f t) = bheight t" by (induct t) simp+
+lemma map_entry_tree_greater: "tree_greater a (map_entry k f t) = tree_greater a t" by (induct t) simp+
+lemma map_entry_tree_less: "tree_less a (map_entry k f t) = tree_less a t" by (induct t) simp+
+lemma map_entry_sorted: "sorted (map_entry k f t) = sorted t"
+  by (induct t) (simp_all add: map_entry_tree_less map_entry_tree_greater)
+
+theorem map_entry_is_rbt [simp]: "is_rbt (map_entry k f t) = is_rbt t" 
+unfolding is_rbt_def by (simp add: map_entry_inv2 map_entry_color_of map_entry_sorted map_entry_inv1 )
+
+theorem lookup_map_entry:
+  "lookup (map_entry k f t) = (lookup t)(k := Option.map f (lookup t k))"
+  by (induct t) (auto split: option.splits simp add: expand_fun_eq)
+
+
+subsection {* Mapping all entries *}
+
+primrec
+  map :: "('a \<Rightarrow> 'b \<Rightarrow> 'c) \<Rightarrow> ('a, 'b) rbt \<Rightarrow> ('a, 'c) rbt"
+where
+  "map f Empty = Empty"
+| "map f (Branch c lt k v rt) = Branch c (map f lt) k (f k v) (map f rt)"
+
+lemma map_entries [simp]: "entries (map f t) = List.map (\<lambda>(k, v). (k, f k v)) (entries t)"
+  by (induct t) auto
+lemma map_keys [simp]: "keys (map f t) = keys t" by (simp add: keys_def split_def)
+lemma map_tree_greater: "tree_greater k (map f t) = tree_greater k t" by (induct t) simp+
+lemma map_tree_less: "tree_less k (map f t) = tree_less k t" by (induct t) simp+
+lemma map_sorted: "sorted (map f t) = sorted t"  by (induct t) (simp add: map_tree_less map_tree_greater)+
+lemma map_color_of: "color_of (map f t) = color_of t" by (induct t) simp+
+lemma map_inv1: "inv1 (map f t) = inv1 t" by (induct t) (simp add: map_color_of)+
+lemma map_inv2: "inv2 (map f t) = inv2 t" "bheight (map f t) = bheight t" by (induct t) simp+
+theorem map_is_rbt [simp]: "is_rbt (map f t) = is_rbt t" 
+unfolding is_rbt_def by (simp add: map_inv1 map_inv2 map_sorted map_color_of)
+
+theorem lookup_map: "lookup (map f t) x = Option.map (f x) (lookup t x)"
+  by (induct t) auto
+
+
+subsection {* Folding over entries *}
+
+definition fold :: "('a \<Rightarrow> 'b \<Rightarrow> 'c \<Rightarrow> 'c) \<Rightarrow> ('a, 'b) rbt \<Rightarrow> 'c \<Rightarrow> 'c" where
+  "fold f t s = foldl (\<lambda>s (k, v). f k v s) s (entries t)"
+
+lemma fold_simps [simp, code]:
+  "fold f Empty = id"
+  "fold f (Branch c lt k v rt) = fold f rt \<circ> f k v \<circ> fold f lt"
+  by (simp_all add: fold_def expand_fun_eq)
+
+
+subsection {* Bulkloading a tree *}
+
+definition bulkload :: "('a \<times> 'b) list \<Rightarrow> ('a\<Colon>linorder, 'b) rbt" where
+  "bulkload xs = foldr (\<lambda>(k, v). insert k v) xs Empty"
+
+lemma bulkload_is_rbt [simp, intro]:
+  "is_rbt (bulkload xs)"
+  unfolding bulkload_def by (induct xs) auto
+
+lemma lookup_bulkload:
+  "lookup (bulkload xs) = map_of xs"
+proof -
+  obtain ys where "ys = rev xs" by simp
+  have "\<And>t. is_rbt t \<Longrightarrow>
+    lookup (foldl (\<lambda>t (k, v). insert k v t) t ys) = lookup t ++ map_of (rev ys)"
+      by (induct ys) (simp_all add: bulkload_def split_def lookup_insert)
+  from this Empty_is_rbt have
+    "lookup (foldl (\<lambda>t (k, v). insert k v t) Empty (rev xs)) = lookup Empty ++ map_of xs"
+     by (simp add: `ys = rev xs`)
+  then show ?thesis by (simp add: bulkload_def foldl_foldr lookup_Empty split_def)
+qed
+
+hide_const (open) Empty insert delete entries keys bulkload lookup map_entry map fold union sorted
+
+end
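
The union above is right-biased: lookup_union states lookup (union s t) = lookup s ++ lookup t, and ++ on maps lets the right operand win on common keys.  A minimal Standard ML sketch of the same semantics on plain association lists (upd and union_assoc are made-up names, not part of the theory):

  (* Insert or overwrite one key/value pair in an association list. *)
  fun upd (k, v) [] = [(k, v)]
    | upd (k, v) ((k', v') :: ps) =
        if k = k' then (k, v) :: ps else (k', v') :: upd (k, v) ps;

  (* Right-biased union: entries of t overwrite entries of s, matching
     lookup (union s t) = lookup s ++ lookup t. *)
  fun union_assoc s t = List.foldl (fn (p, acc) => upd p acc) s t;

  (* union_assoc [(1, "a"), (2, "b")] [(2, "B"), (3, "c")]
     evaluates to [(1, "a"), (2, "B"), (3, "c")]. *)
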
--- a/src/HOL/Library/Table.thy	Wed Apr 21 10:44:44 2010 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,139 +0,0 @@
-(* Author: Florian Haftmann, TU Muenchen *)
-
-header {* Tables: finite mappings implemented by red-black trees *}
-
-theory Table
-imports Main RBT
-begin
-
-subsection {* Type definition *}
-
-typedef (open) ('a, 'b) table = "{t :: ('a\<Colon>linorder, 'b) rbt. is_rbt t}"
-  morphisms tree_of Table
-proof -
-  have "RBT.Empty \<in> ?table" by simp
-  then show ?thesis ..
-qed
-
-lemma is_rbt_tree_of [simp, intro]:
-  "is_rbt (tree_of t)"
-  using tree_of [of t] by simp
-
-lemma table_eq:
-  "t1 = t2 \<longleftrightarrow> tree_of t1 = tree_of t2"
-  by (simp add: tree_of_inject)
-
-code_abstype Table tree_of
-  by (simp add: tree_of_inverse)
-
-
-subsection {* Primitive operations *}
-
-definition lookup :: "('a\<Colon>linorder, 'b) table \<Rightarrow> 'a \<rightharpoonup> 'b" where
-  [code]: "lookup t = RBT.lookup (tree_of t)"
-
-definition empty :: "('a\<Colon>linorder, 'b) table" where
-  "empty = Table RBT.Empty"
-
-lemma tree_of_empty [code abstract]:
-  "tree_of empty = RBT.Empty"
-  by (simp add: empty_def Table_inverse)
-
-definition update :: "'a\<Colon>linorder \<Rightarrow> 'b \<Rightarrow> ('a, 'b) table \<Rightarrow> ('a, 'b) table" where
-  "update k v t = Table (RBT.insert k v (tree_of t))"
-
-lemma tree_of_update [code abstract]:
-  "tree_of (update k v t) = RBT.insert k v (tree_of t)"
-  by (simp add: update_def Table_inverse)
-
-definition delete :: "'a\<Colon>linorder \<Rightarrow> ('a, 'b) table \<Rightarrow> ('a, 'b) table" where
-  "delete k t = Table (RBT.delete k (tree_of t))"
-
-lemma tree_of_delete [code abstract]:
-  "tree_of (delete k t) = RBT.delete k (tree_of t)"
-  by (simp add: delete_def Table_inverse)
-
-definition entries :: "('a\<Colon>linorder, 'b) table \<Rightarrow> ('a \<times> 'b) list" where
-  [code]: "entries t = RBT.entries (tree_of t)"
-
-definition bulkload :: "('a\<Colon>linorder \<times> 'b) list \<Rightarrow> ('a, 'b) table" where
-  "bulkload xs = Table (RBT.bulkload xs)"
-
-lemma tree_of_bulkload [code abstract]:
-  "tree_of (bulkload xs) = RBT.bulkload xs"
-  by (simp add: bulkload_def Table_inverse)
-
-definition map_entry :: "'a \<Rightarrow> ('b \<Rightarrow> 'b) \<Rightarrow> ('a\<Colon>linorder, 'b) table \<Rightarrow> ('a, 'b) table" where
-  "map_entry k f t = Table (RBT.map_entry k f (tree_of t))"
-
-lemma tree_of_map_entry [code abstract]:
-  "tree_of (map_entry k f t) = RBT.map_entry k f (tree_of t)"
-  by (simp add: map_entry_def Table_inverse)
-
-definition map :: "('a \<Rightarrow> 'b \<Rightarrow> 'b) \<Rightarrow> ('a\<Colon>linorder, 'b) table \<Rightarrow> ('a, 'b) table" where
-  "map f t = Table (RBT.map f (tree_of t))"
-
-lemma tree_of_map [code abstract]:
-  "tree_of (map f t) = RBT.map f (tree_of t)"
-  by (simp add: map_def Table_inverse)
-
-definition fold :: "('a \<Rightarrow> 'b \<Rightarrow> 'c \<Rightarrow> 'c) \<Rightarrow> ('a\<Colon>linorder, 'b) table \<Rightarrow> 'c \<Rightarrow> 'c" where
-  [code]: "fold f t = RBT.fold f (tree_of t)"
-
-
-subsection {* Derived operations *}
-
-definition is_empty :: "('a\<Colon>linorder, 'b) table \<Rightarrow> bool" where
-  [code]: "is_empty t = (case tree_of t of RBT.Empty \<Rightarrow> True | _ \<Rightarrow> False)"
-
-
-subsection {* Abstract lookup properties *}
-
-lemma lookup_Table:
-  "is_rbt t \<Longrightarrow> lookup (Table t) = RBT.lookup t"
-  by (simp add: lookup_def Table_inverse)
-
-lemma lookup_tree_of:
-  "RBT.lookup (tree_of t) = lookup t"
-  by (simp add: lookup_def)
-
-lemma entries_tree_of:
-  "RBT.entries (tree_of t) = entries t"
-  by (simp add: entries_def)
-
-lemma lookup_empty [simp]:
-  "lookup empty = Map.empty"
-  by (simp add: empty_def lookup_Table expand_fun_eq)
-
-lemma lookup_update [simp]:
-  "lookup (update k v t) = (lookup t)(k \<mapsto> v)"
-  by (simp add: update_def lookup_Table lookup_insert lookup_tree_of)
-
-lemma lookup_delete [simp]:
-  "lookup (delete k t) = (lookup t)(k := None)"
-  by (simp add: delete_def lookup_Table lookup_delete lookup_tree_of restrict_complement_singleton_eq)
-
-lemma map_of_entries [simp]:
-  "map_of (entries t) = lookup t"
-  by (simp add: entries_def map_of_entries lookup_tree_of)
-
-lemma lookup_bulkload [simp]:
-  "lookup (bulkload xs) = map_of xs"
-  by (simp add: bulkload_def lookup_Table lookup_bulkload)
-
-lemma lookup_map_entry [simp]:
-  "lookup (map_entry k f t) = (lookup t)(k := Option.map f (lookup t k))"
-  by (simp add: map_entry_def lookup_Table lookup_map_entry lookup_tree_of)
-
-lemma lookup_map [simp]:
-  "lookup (map f t) k = Option.map (f k) (lookup t k)"
-  by (simp add: map_def lookup_Table lookup_map lookup_tree_of)
-
-lemma fold_fold:
-  "fold f t = (\<lambda>s. foldl (\<lambda>s (k, v). f k v s) s (entries t))"
-  by (simp add: fold_def expand_fun_eq RBT.fold_def entries_tree_of)
-
-hide (open) const tree_of lookup empty update delete
-  entries bulkload map_entry map fold
-
-end
--- a/src/HOL/Library/Transitive_Closure_Table.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Library/Transitive_Closure_Table.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -227,8 +227,8 @@
 value "test\<^sup>*\<^sup>* A C"
 value "test\<^sup>*\<^sup>* C A"
 
-hide type ty
-hide const test A B C
+hide_type ty
+hide_const test A B C
 
 end
 
--- a/src/HOL/Library/Tree.thy	Wed Apr 21 10:44:44 2010 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,152 +0,0 @@
-(* Author: Florian Haftmann, TU Muenchen *)
-
-header {* Trees implementing mappings. *}
-
-theory Tree
-imports Mapping
-begin
-
-subsection {* Type definition and operations *}
-
-datatype ('a, 'b) tree = Empty
-  | Branch 'b 'a "('a, 'b) tree" "('a, 'b) tree"
-
-primrec lookup :: "('a\<Colon>linorder, 'b) tree \<Rightarrow> 'a \<rightharpoonup> 'b" where
-  "lookup Empty = (\<lambda>_. None)"
-  | "lookup (Branch v k l r) = (\<lambda>k'. if k' = k then Some v
-       else if k' \<le> k then lookup l k' else lookup r k')"
-
-primrec update :: "'a\<Colon>linorder \<Rightarrow> 'b \<Rightarrow> ('a, 'b) tree \<Rightarrow> ('a, 'b) tree" where
-  "update k v Empty = Branch v k Empty Empty"
-  | "update k' v' (Branch v k l r) = (if k' = k then
-      Branch v' k l r else if k' \<le> k
-      then Branch v k (update k' v' l) r
-      else Branch v k l (update k' v' r))"
-
-primrec keys :: "('a\<Colon>linorder, 'b) tree \<Rightarrow> 'a list" where
-  "keys Empty = []"
-  | "keys (Branch _ k l r) = k # keys l @ keys r"
-
-definition size :: "('a\<Colon>linorder, 'b) tree \<Rightarrow> nat" where
-  "size t = length (filter (\<lambda>x. x \<noteq> None) (map (lookup t) (remdups (keys t))))"
-
-fun bulkload :: "'a list \<Rightarrow> ('a \<Rightarrow> 'b) \<Rightarrow> ('a\<Colon>linorder, 'b) tree" where
-  [simp del]: "bulkload ks f = (case ks of [] \<Rightarrow> Empty | _ \<Rightarrow> let
-     mid = length ks div 2;
-     ys = take mid ks;
-     x = ks ! mid;
-     zs = drop (Suc mid) ks
-   in Branch (f x) x (bulkload ys f) (bulkload zs f))"
-
-
-subsection {* Properties *}
-
-lemma dom_lookup:
-  "dom (Tree.lookup t) = set (filter (\<lambda>k. lookup t k \<noteq> None) (remdups (keys t)))"
-proof -
-  have "dom (Tree.lookup t) = set (filter (\<lambda>k. lookup t k \<noteq> None) (keys t))"
-  by (induct t) (auto simp add: dom_if)
-  also have "\<dots> = set (filter (\<lambda>k. lookup t k \<noteq> None) (remdups (keys t)))"
-    by simp
-  finally show ?thesis .
-qed
-
-lemma lookup_finite:
-  "finite (dom (lookup t))"
-  unfolding dom_lookup by simp
-
-lemma lookup_update:
-  "lookup (update k v t) = (lookup t)(k \<mapsto> v)"
-  by (induct t) (simp_all add: expand_fun_eq)
-
-lemma lookup_bulkload:
-  "sorted ks \<Longrightarrow> lookup (bulkload ks f) = (Some o f) |` set ks"
-proof (induct ks f rule: bulkload.induct)
-  case (1 ks f) show ?case proof (cases ks)
-    case Nil then show ?thesis by (simp add: bulkload.simps)
-  next
-    case (Cons w ws)
-    then have case_simp: "\<And>w v::('a, 'b) tree. (case ks of [] \<Rightarrow> v | _ \<Rightarrow> w) = w"
-      by (cases ks) auto
-    from Cons have "ks \<noteq> []" by simp
-    then have "0 < length ks" by simp
-    let ?mid = "length ks div 2"
-    let ?ys = "take ?mid ks"
-    let ?x = "ks ! ?mid"
-    let ?zs = "drop (Suc ?mid) ks"
-    from `ks \<noteq> []` have ks_split: "ks = ?ys @ [?x] @ ?zs"
-      by (simp add: id_take_nth_drop)
-    then have in_ks: "\<And>x. x \<in> set ks \<longleftrightarrow> x \<in> set (?ys @ [?x] @ ?zs)"
-      by simp
-    with ks_split have ys_x: "\<And>y. y \<in> set ?ys \<Longrightarrow> y \<le> ?x"
-      and x_zs: "\<And>z. z \<in> set ?zs \<Longrightarrow> ?x \<le> z"
-    using `sorted ks` sorted_append [of "?ys" "[?x] @ ?zs"] sorted_append [of "[?x]" "?zs"]
-      by simp_all
-    have ys: "lookup (bulkload ?ys f) = (Some o f) |` set ?ys"
-      by (rule "1.hyps"(1)) (auto intro: Cons sorted_take `sorted ks`)
-    have zs: "lookup (bulkload ?zs f) = (Some o f) |` set ?zs"
-      by (rule "1.hyps"(2)) (auto intro: Cons sorted_drop `sorted ks`)
-    show ?thesis using `0 < length ks`
-      by (simp add: bulkload.simps)
-        (auto simp add: restrict_map_def in_ks case_simp Let_def ys zs expand_fun_eq
-           dest: in_set_takeD in_set_dropD ys_x x_zs)
-  qed
-qed
-
-
-subsection {* Trees as mappings *}
-
-definition Tree :: "('a\<Colon>linorder, 'b) tree \<Rightarrow> ('a, 'b) mapping" where
-  "Tree t = Mapping (Tree.lookup t)"
-
-lemma [code, code del]:
-  "(eq_class.eq :: (_, _) mapping \<Rightarrow> _) = eq_class.eq" ..
-
-lemma [code, code del]:
-  "Mapping.delete k m = Mapping.delete k m" ..
-
-code_datatype Tree
-
-lemma empty_Tree [code]:
-  "Mapping.empty = Tree Empty"
-  by (simp add: Tree_def Mapping.empty_def)
-
-lemma lookup_Tree [code]:
-  "Mapping.lookup (Tree t) = lookup t"
-  by (simp add: Tree_def)
-
-lemma is_empty_Tree [code]:
-  "Mapping.is_empty (Tree Empty) \<longleftrightarrow> True"
-  "Mapping.is_empty (Tree (Branch v k l r)) \<longleftrightarrow> False"
-  by (simp_all only: is_empty_def lookup_Tree dom_lookup) auto
-
-lemma update_Tree [code]:
-  "Mapping.update k v (Tree t) = Tree (update k v t)"
-  by (simp add: Tree_def lookup_update)
-
-lemma [code, code del]:
-  "Mapping.ordered_keys = Mapping.ordered_keys " ..
-
-lemma keys_Tree [code]:
-  "Mapping.keys (Tree t) = set (filter (\<lambda>k. lookup t k \<noteq> None) (remdups (keys t)))"
-  by (simp add: Mapping.keys_def lookup_Tree dom_lookup)
-
-lemma size_Tree [code]:
-  "Mapping.size (Tree t) = size t"
-proof -
-  have "card (dom (Tree.lookup t)) = length (filter (\<lambda>x. x \<noteq> None) (map (lookup t) (remdups (keys t))))"
-    unfolding dom_lookup by (subst distinct_card) (auto simp add: comp_def)
-  then show ?thesis by (auto simp add: Tree_def Mapping.size_def size_def)
-qed
-
-lemma tabulate_Tree [code]:
-  "Mapping.tabulate ks f = Tree (bulkload (sort ks) f)"
-proof -
-  have "Mapping.lookup (Mapping.tabulate ks f) = Mapping.lookup (Tree (bulkload (sort ks) f))"
-    by (simp add: lookup_bulkload lookup_Tree)
-  then show ?thesis by (simp only: lookup_inject)
-qed
-
-hide (open) const Empty Branch lookup update keys size bulkload Tree
-
-end
--- a/src/HOL/List.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/List.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -172,7 +172,8 @@
   insert :: "'a \<Rightarrow> 'a list \<Rightarrow> 'a list" where
   "insert x xs = (if x \<in> set xs then xs else x # xs)"
 
-hide (open) const insert hide (open) fact insert_def
+hide_const (open) insert
+hide_fact (open) insert_def
 
 primrec
   remove1 :: "'a \<Rightarrow> 'a list \<Rightarrow> 'a list" where
@@ -513,6 +514,17 @@
     (cases zs, simp_all)
 qed
 
+lemma list_induct4 [consumes 3, case_names Nil Cons]:
+  "length xs = length ys \<Longrightarrow> length ys = length zs \<Longrightarrow> length zs = length ws \<Longrightarrow>
+   P [] [] [] [] \<Longrightarrow> (\<And>x xs y ys z zs w ws. length xs = length ys \<Longrightarrow>
+   length ys = length zs \<Longrightarrow> length zs = length ws \<Longrightarrow> P xs ys zs ws \<Longrightarrow>
+   P (x#xs) (y#ys) (z#zs) (w#ws)) \<Longrightarrow> P xs ys zs ws"
+proof (induct xs arbitrary: ys zs ws)
+  case Nil then show ?case by simp
+next
+  case (Cons x xs ys zs ws) then show ?case by ((cases ys, simp_all), (cases zs, simp_all)) (cases ws, simp_all)
+qed
+
 lemma list_induct2': 
   "\<lbrakk> P [] [];
   \<And>x xs. P (x#xs) [];
@@ -1931,6 +1943,12 @@
 
 declare zip_Cons [simp del]
 
+lemma [code]:
+  "zip [] ys = []"
+  "zip xs [] = []"
+  "zip (x # xs) (y # ys) = (x, y) # zip xs ys"
+  by (fact zip_Nil zip.simps(1) zip_Cons_Cons)+
+
 lemma zip_Cons1:
  "zip (x#xs) ys = (case ys of [] \<Rightarrow> [] | y#ys \<Rightarrow> (x,y)#zip xs ys)"
 by(auto split:list.split)
@@ -4573,7 +4591,7 @@
 
 declare set_map [symmetric, code_unfold]
 
-hide (open) const length_unique
+hide_const (open) length_unique
 
 
 text {* Code for bounded quantification and summation over nats. *}
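
The new [code] equations for zip above describe its behaviour clause by clause, with the two empty-list cases listed first.  A rough Standard ML counterpart of the code these equations specify (the name zip' is made up for illustration):

  (* zip', mirroring the three code equations for zip: an empty list on
     either side yields [], otherwise the heads are paired and the tails
     are zipped recursively. *)
  fun zip' ([], _) = []
    | zip' (_, []) = []
    | zip' (x :: xs, y :: ys) = (x, y) :: zip' (xs, ys);

  (* zip' ([1, 2, 3], ["a", "b"]) evaluates to [(1, "a"), (2, "b")]. *)
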
--- a/src/HOL/Mirabelle/Tools/mirabelle_sledgehammer.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Mirabelle/Tools/mirabelle_sledgehammer.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -391,7 +391,7 @@
       |> Option.map (fst o read_int o explode)
       |> the_default 5
     val params = default_params thy [("minimize_timeout", Int.toString timeout)]
-    val minimize = minimize_theorems params linear_minimize prover prover_name 1
+    val minimize = minimize_theorems params prover prover_name 1
     val _ = log separator
   in
     case minimize st (these (!named_thms)) of
--- a/src/HOL/NSA/Filter.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/NSA/Filter.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -403,6 +403,6 @@
 
 lemmas freeultrafilter_Ex = UFT.freeultrafilter_ex [OF UFT.intro]
 
-hide (open) const filter
+hide_const (open) filter
 
 end
--- a/src/HOL/Nat.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Nat.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -175,7 +175,7 @@
 
 end
 
-hide (open) fact add_0 add_0_right diff_0
+hide_fact (open) add_0 add_0_right diff_0
 
 instantiation nat :: comm_semiring_1_cancel
 begin
@@ -1212,7 +1212,7 @@
   "funpow (Suc n) f = f o funpow n f"
   unfolding funpow_code_def by simp_all
 
-hide (open) const funpow
+hide_const (open) funpow
 
 lemma funpow_add:
   "f ^^ (m + n) = f ^^ m \<circ> f ^^ n"
@@ -1504,7 +1504,7 @@
 lemma diff_diff_eq: "[| k \<le> m;  k \<le> (n::nat) |] ==> ((m-k) - (n-k)) = (m-n)"
 by (simp split add: nat_diff_split)
 
-hide (open) fact diff_diff_eq
+hide_fact (open) diff_diff_eq
 
 lemma eq_diff_iff: "[| k \<le> m;  k \<le> (n::nat) |] ==> (m-k = n-k) = (m=n)"
 by (auto split add: nat_diff_split)
--- a/src/HOL/New_DSequence.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/New_DSequence.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -91,12 +91,12 @@
     None => Lazy_Sequence.hb_single ()
   | Some ((), xq) => Lazy_Sequence.empty)"
 
-hide (open) type pos_dseq neg_dseq
+hide_type (open) pos_dseq neg_dseq
 
-hide (open) const 
+hide_const (open)
   pos_empty pos_single pos_bind pos_union pos_if_seq pos_iterate_upto pos_not_seq pos_map
   neg_empty neg_single neg_bind neg_union neg_if_seq neg_iterate_upto neg_not_seq neg_map
-hide (open) fact
+hide_fact (open)
   pos_empty_def pos_single_def pos_bind_def pos_union_def pos_if_seq_def pos_iterate_upto_def pos_not_seq_def pos_map_def
   neg_empty_def neg_single_def neg_bind_def neg_union_def neg_if_seq_def neg_iterate_upto_def neg_not_seq_def neg_map_def
 
--- a/src/HOL/New_Random_Sequence.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/New_Random_Sequence.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -91,16 +91,16 @@
 where
   "neg_map f P = neg_bind P (neg_single o f)"
 (*
-hide const DSequence.empty DSequence.single DSequence.eval
+hide_const DSequence.empty DSequence.single DSequence.eval
   DSequence.map_seq DSequence.bind DSequence.union DSequence.if_seq DSequence.not_seq
   DSequence.map
 *)
 
-hide (open) const
+hide_const (open)
   pos_empty pos_single pos_bind pos_union pos_if_random_dseq pos_iterate_upto pos_not_random_dseq iter Random pos_map
   neg_empty neg_single neg_bind neg_union neg_if_random_dseq neg_iterate_upto neg_not_random_dseq neg_map
-hide type New_DSequence.pos_dseq New_DSequence.neg_dseq pos_random_dseq neg_random_dseq
+hide_type New_DSequence.pos_dseq New_DSequence.neg_dseq pos_random_dseq neg_random_dseq
 (* FIXME: hide facts *)
-(*hide (open) fact empty_def single_def bind_def union_def if_random_dseq_def not_random_dseq_def Random.simps map_def*)
+(*hide_fact (open) empty_def single_def bind_def union_def if_random_dseq_def not_random_dseq_def Random.simps map_def*)
 
 end
\ No newline at end of file
--- a/src/HOL/Nitpick.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Nitpick.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -237,15 +237,15 @@
 
 setup {* Nitpick_Isar.setup *}
 
-hide (open) const unknown is_unknown undefined_fast_The undefined_fast_Eps bisim 
+hide_const (open) unknown is_unknown undefined_fast_The undefined_fast_Eps bisim 
     bisim_iterator_max Quot safe_The safe_Eps FinFun FunBox PairBox Word refl'
     wf' wf_wfrec wf_wfrec' wfrec' card' setsum' fold_graph' nat_gcd nat_lcm
     int_gcd int_lcm Frac Abs_Frac Rep_Frac zero_frac one_frac num denom
     norm_frac frac plus_frac times_frac uminus_frac number_of_frac inverse_frac
     less_eq_frac of_frac
-hide (open) type bisim_iterator fin_fun fun_box pair_box unsigned_bit signed_bit
+hide_type (open) bisim_iterator fin_fun fun_box pair_box unsigned_bit signed_bit
     word
-hide (open) fact If_def Ex1_def rtrancl_def rtranclp_def tranclp_def refl'_def
+hide_fact (open) If_def Ex1_def rtrancl_def rtranclp_def tranclp_def refl'_def
     wf'_def wf_wfrec'_def wfrec'_def card'_def setsum'_def fold_graph'_def
     The_psimp Eps_psimp unit_case_def nat_case_def list_size_simp nat_gcd_def
     nat_lcm_def int_gcd_def int_lcm_def Frac_def zero_frac_def one_frac_def
--- a/src/HOL/Option.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Option.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -82,7 +82,7 @@
   by (rule ext) (simp split: sum.split)
 
 
-hide (open) const set map
+hide_const (open) set map
 
 subsubsection {* Code generator setup *}
 
@@ -102,7 +102,7 @@
   "eq_class.eq x None \<longleftrightarrow> is_none x"
   by (simp add: eq is_none_none)
 
-hide (open) const is_none
+hide_const (open) is_none
 
 code_type option
   (SML "_ option")
--- a/src/HOL/Predicate.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Predicate.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -934,8 +934,8 @@
   bot ("\<bottom>") and
   bind (infixl "\<guillemotright>=" 70)
 
-hide (open) type pred seq
-hide (open) const Pred eval single bind is_empty singleton if_pred not_pred holds
+hide_type (open) pred seq
+hide_const (open) Pred eval single bind is_empty singleton if_pred not_pred holds
   Empty Insert Join Seq member pred_of_seq "apply" adjunct null the_only eq map not_unique the
 
 end
--- a/src/HOL/Predicate_Compile_Examples/Predicate_Compile_Examples.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Predicate_Compile_Examples/Predicate_Compile_Examples.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -1168,7 +1168,7 @@
 
 code_pred (expected_modes: o => bool, i => bool) S\<^isub>4p .
 
-hide const a b
+hide_const a b
 
 subsection {* Lambda *}
 
--- a/src/HOL/Predicate_Compile_Examples/Predicate_Compile_Quickcheck_Examples.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Predicate_Compile_Examples/Predicate_Compile_Quickcheck_Examples.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -150,7 +150,7 @@
 quickcheck[generator = predicate_compile_ff_nofs, size=5, iterations=1]
 oops
 
-hide const a b
+hide_const a b
 
 subsection {* Lexicographic order *}
 (* TODO *)
--- a/src/HOL/Product_Type.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Product_Type.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -637,7 +637,7 @@
 use "Tools/split_rule.ML"
 setup Split_Rule.setup
 
-hide const internal_split
+hide_const internal_split
 
 
 lemmas prod_caseI = prod.cases [THEN iffD2, standard]
--- a/src/HOL/Quickcheck.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Quickcheck.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -203,9 +203,9 @@
 definition map :: "('a \<Rightarrow> 'b) \<Rightarrow> ('a randompred \<Rightarrow> 'b randompred)"
   where "map f P = bind P (single o f)"
 
-hide (open) fact iter'.simps iter_def empty_def single_def bind_def union_def if_randompred_def iterate_upto_def not_randompred_def Random_def map_def
-hide (open) type randompred
-hide (open) const random collapse beyond random_fun_aux random_fun_lift
+hide_fact (open) iter'.simps iter_def empty_def single_def bind_def union_def if_randompred_def iterate_upto_def not_randompred_def Random_def map_def
+hide_type (open) randompred
+hide_const (open) random collapse beyond random_fun_aux random_fun_lift
   iter' iter empty single bind union if_randompred iterate_upto not_randompred Random map
 
 end
--- a/src/HOL/Quotient.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Quotient.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -585,19 +585,15 @@
   assumes q1: "Quotient R1 Abs1 Rep1"
   and     q2: "Quotient R2 Abs2 Rep2"
   and     q3: "Quotient R3 Abs3 Rep3"
-  shows "(Rep1 ---> Abs3) (((Abs2 ---> Rep3) f) o ((Abs1 ---> Rep2) g)) = f o g"
+  shows "((Abs2 ---> Rep3) ---> (Abs1 ---> Rep2) ---> (Rep1 ---> Abs3)) op \<circ> = op \<circ>"
+  and   "(id ---> (Abs1 ---> id) ---> Rep1 ---> id) op \<circ> = op \<circ>"
   using Quotient_abs_rep[OF q1] Quotient_abs_rep[OF q2] Quotient_abs_rep[OF q3]
-  unfolding o_def expand_fun_eq by simp
+  unfolding o_def expand_fun_eq by simp_all
 
 lemma o_rsp:
-  assumes q1: "Quotient R1 Abs1 Rep1"
-  and     q2: "Quotient R2 Abs2 Rep2"
-  and     q3: "Quotient R3 Abs3 Rep3"
-  and     a1: "(R2 ===> R3) f1 f2"
-  and     a2: "(R1 ===> R2) g1 g2"
-  shows "(R1 ===> R3) (f1 o g1) (f2 o g2)"
-  using a1 a2 unfolding o_def expand_fun_eq
-  by (auto)
+  "((R2 ===> R3) ===> (R1 ===> R2) ===> (R1 ===> R3)) op \<circ> op \<circ>"
+  "(op = ===> (R1 ===> op =) ===> R1 ===> op =) op \<circ> op \<circ>"
+  unfolding fun_rel_def o_def expand_fun_eq by auto
 
 lemma cond_prs:
   assumes a: "Quotient R absf repf"
@@ -606,15 +602,14 @@
 
 lemma if_prs:
   assumes q: "Quotient R Abs Rep"
-  shows "Abs (If a (Rep b) (Rep c)) = If a b c"
-  using Quotient_abs_rep[OF q] by auto
+  shows "(id ---> Rep ---> Rep ---> Abs) If = If"
+  using Quotient_abs_rep[OF q]
+  by (auto simp add: expand_fun_eq)
 
-(* q not used *)
 lemma if_rsp:
   assumes q: "Quotient R Abs Rep"
-  and     a: "a1 = a2" "R b1 b2" "R c1 c2"
-  shows "R (If a1 b1 c1) (If a2 b2 c2)"
-  using a by auto
+  shows "(op = ===> R ===> R ===> R) If If"
+  by auto
 
 lemma let_prs:
   assumes q1: "Quotient R1 Abs1 Rep1"
@@ -717,7 +712,8 @@
 declare [[map "fun" = (fun_map, fun_rel)]]
 
 lemmas [quot_thm] = fun_quotient
-lemmas [quot_respect] = quot_rel_rsp
+lemmas [quot_respect] = quot_rel_rsp if_rsp o_rsp
+lemmas [quot_preserve] = if_prs o_prs
 lemmas [quot_equiv] = identity_equivp
 
 
@@ -747,7 +743,7 @@
   about the lifted theorem in a tactic.
 *}
 definition
-  "Quot_True x \<equiv> True"
+  "Quot_True (x :: 'a) \<equiv> True"
 
 lemma
   shows QT_all: "Quot_True (All P) \<Longrightarrow> Quot_True P"
--- a/src/HOL/Random.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Random.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -174,10 +174,10 @@
 end;
 *}
 
-hide (open) type seed
-hide (open) const inc_shift minus_shift log "next" split_seed
+hide_type (open) seed
+hide_const (open) inc_shift minus_shift log "next" split_seed
   iterate range select pick select_weight
-hide (open) fact range_def
+hide_fact (open) range_def
 
 no_notation fcomp (infixl "o>" 60)
 no_notation scomp (infixl "o\<rightarrow>" 60)
--- a/src/HOL/Random_Sequence.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Random_Sequence.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -48,14 +48,14 @@
   "map f P = bind P (single o f)"
 
 (*
-hide const DSequence.empty DSequence.single DSequence.eval
+hide_const DSequence.empty DSequence.single DSequence.eval
   DSequence.map_seq DSequence.bind DSequence.union DSequence.if_seq DSequence.not_seq
   DSequence.map
 *)
 
-hide (open) const empty single bind union if_random_dseq not_random_dseq Random map
+hide_const (open) empty single bind union if_random_dseq not_random_dseq Random map
 
-hide type DSequence.dseq random_dseq
-hide (open) fact empty_def single_def bind_def union_def if_random_dseq_def not_random_dseq_def Random.simps map_def
+hide_type DSequence.dseq random_dseq
+hide_fact (open) empty_def single_def bind_def union_def if_random_dseq_def not_random_dseq_def Random.simps map_def
 
 end
\ No newline at end of file
--- a/src/HOL/Rat.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Rat.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -1019,11 +1019,9 @@
 definition Frct :: "int \<times> int \<Rightarrow> rat" where
   [simp]: "Frct p = Fract (fst p) (snd p)"
 
-code_abstype Frct quotient_of
-proof (rule eq_reflection)
-  fix r :: rat
-  show "Frct (quotient_of r) = r" by (cases r) (auto intro: quotient_of_eq)
-qed
+lemma [code abstype]:
+  "Frct (quotient_of q) = q"
+  by (cases q) (auto intro: quotient_of_eq)
 
 lemma Frct_code_post [code_post]:
   "Frct (0, k) = 0"
--- a/src/HOL/Record.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Record.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -455,7 +455,7 @@
 use "Tools/record.ML"
 setup Record.setup
 
-hide (open) const Tuple_Isomorphism repr abst iso_tuple_fst iso_tuple_snd
+hide_const (open) Tuple_Isomorphism repr abst iso_tuple_fst iso_tuple_snd
   iso_tuple_fst_update iso_tuple_snd_update iso_tuple_cons
   iso_tuple_surjective_proof_assist iso_tuple_update_accessor_cong_assist
   iso_tuple_update_accessor_eq_assist tuple_iso_tuple
--- a/src/HOL/Statespace/state_fun.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Statespace/state_fun.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -193,7 +193,7 @@
 
              (* mk_updterm returns 
               *  - (orig-term-skeleton,simplified-term-skeleton, vars, b)
-              *     where boolean b tells if a simplification has occured.
+              *     where boolean b tells if a simplification has occurred.
                     "orig-term-skeleton = simplified-term-skeleton" is
               *     the desired simplification rule.
               * The algorithm first walks down the updates to the seed-state while
--- a/src/HOL/Statespace/state_space.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Statespace/state_space.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -478,6 +478,21 @@
     Type (name, Ts) => (Ts, name)
   | T => error ("Bad parent statespace specification: " ^ Syntax.string_of_typ ctxt T));
 
+fun read_typ ctxt raw_T env =
+  let
+    val ctxt' = fold (Variable.declare_typ o TFree) env ctxt;
+    val T = Syntax.read_typ ctxt' raw_T;
+    val env' = OldTerm.add_typ_tfrees (T, env);
+  in (T, env') end;
+
+fun cert_typ ctxt raw_T env =
+  let
+    val thy = ProofContext.theory_of ctxt;
+    val T = Type.no_tvars (Sign.certify_typ thy raw_T)
+      handle TYPE (msg, _, _) => error msg;
+    val env' = OldTerm.add_typ_tfrees (T, env);
+  in (T, env') end;
+
 fun gen_define_statespace prep_typ state_space args name parents comps thy =
   let (* - args distinct
          - only args may occur in comps and parent-instantiations
@@ -500,7 +515,7 @@
 
         val (Ts',env') = fold_map (prep_typ ctxt) Ts env
             handle ERROR msg => cat_error msg
-                    ("The error(s) above occured in parent statespace specification "
+                    ("The error(s) above occurred in parent statespace specification "
                     ^ quote pname);
         val err_insts = if length args <> length Ts' then
             ["number of type instantiation(s) does not match arguments of parent statespace "
@@ -539,7 +554,7 @@
 
     fun prep_comp (n,T) env =
       let val (T', env') = prep_typ ctxt T env handle ERROR msg =>
-       cat_error msg ("The error(s) above occured in component " ^ quote n)
+       cat_error msg ("The error(s) above occurred in component " ^ quote n)
       in ((n,T'), env') end;
 
     val (comps',env') = fold_map prep_comp comps env;
@@ -579,8 +594,8 @@
   end
   handle ERROR msg => cat_error msg ("Failed to define statespace " ^ quote name);
 
-val define_statespace = gen_define_statespace Record.read_typ NONE;
-val define_statespace_i = gen_define_statespace Record.cert_typ;
+val define_statespace = gen_define_statespace read_typ NONE;
+val define_statespace_i = gen_define_statespace cert_typ;
 
 
 (*** parse/print - translations ***)
--- a/src/HOL/String.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/String.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -217,6 +217,6 @@
 in Codegen.add_codegen "char_codegen" char_codegen end
 *}
 
-hide (open) type literal
+hide_type (open) literal
 
 end
\ No newline at end of file
--- a/src/HOL/Sum_Type.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Sum_Type.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -187,6 +187,6 @@
   show "u \<in> UNIV <+> UNIV \<longleftrightarrow> u \<in> UNIV" by (cases u) auto
 qed
 
-hide (open) const Suml Sumr Projl Projr
+hide_const (open) Suml Sumr Projl Projr
 
 end
--- a/src/HOL/Tools/ATP_Manager/atp_manager.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Tools/ATP_Manager/atp_manager.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -12,12 +12,14 @@
   type params =
     {debug: bool,
      verbose: bool,
+     overlord: bool,
      atps: string list,
      full_types: bool,
+     explicit_apply: bool,
      respect_no_atp: bool,
      relevance_threshold: real,
      convergence: real,
-     theory_const: bool option,
+     theory_relevant: bool option,
      higher_order: bool option,
      follow_defs: bool,
      isar_proof: bool,
@@ -56,6 +58,7 @@
 structure ATP_Manager : ATP_MANAGER =
 struct
 
+open Sledgehammer_Util
 open Sledgehammer_Fact_Filter
 open Sledgehammer_Proof_Reconstruct
 
@@ -64,12 +67,14 @@
 type params =
   {debug: bool,
    verbose: bool,
+   overlord: bool,
    atps: string list,
    full_types: bool,
+   explicit_apply: bool,
    respect_no_atp: bool,
    relevance_threshold: real,
    convergence: real,
-   theory_const: bool option,
+   theory_relevant: bool option,
    higher_order: bool option,
    follow_defs: bool,
    isar_proof: bool,
@@ -158,14 +163,22 @@
 
 (* unregister ATP thread *)
 
-fun unregister message thread = Synchronized.change global_state
+fun unregister ({verbose, ...} : params) message thread =
+  Synchronized.change global_state
   (fn state as {manager, timeout_heap, active, cancelling, messages, store} =>
     (case lookup_thread active thread of
-      SOME (_, _, description) =>
+      SOME (birth_time, _, description) =>
         let
           val active' = delete_thread thread active;
-          val cancelling' = (thread, (Time.now (), description)) :: cancelling;
-          val message' = description ^ "\n" ^ message;
+          val now = Time.now ()
+          val cancelling' = (thread, (now, description)) :: cancelling;
+          val message' =
+            description ^ "\n" ^ message ^
+            (if verbose then
+               "Total time: " ^ Int.toString (Time.toMilliseconds
+                                          (Time.- (now, birth_time))) ^ " ms.\n"
+             else
+               "")
           val messages' = message' :: messages;
           val store' = message' ::
             (if length store <= message_store_limit then store
@@ -179,16 +192,23 @@
 val min_wait_time = Time.fromMilliseconds 300;
 val max_wait_time = Time.fromSeconds 10;
 
+(* This is a workaround for Proof General's off-by-a-few sendback display bug,
+   whereby "pr" in "proof" is not highlighted. *)
+val break_into_chunks =
+  map (replace_all "\n\n" "\000") #> maps (space_explode "\000")
+
 fun print_new_messages () =
-  let val msgs = Synchronized.change_result global_state
-    (fn {manager, timeout_heap, active, cancelling, messages, store} =>
-      (messages, make_state manager timeout_heap active cancelling [] store))
-  in
-    if null msgs then ()
-    else priority ("Sledgehammer: " ^ space_implode "\n\n" msgs)
-  end;
+  case Synchronized.change_result global_state
+         (fn {manager, timeout_heap, active, cancelling, messages, store} =>
+             (messages, make_state manager timeout_heap active cancelling []
+                                   store)) of
+    [] => ()
+  | msgs =>
+    msgs |> break_into_chunks
+         |> (fn msg :: msgs => "Sledgehammer: " ^ msg :: msgs)
+         |> List.app priority
 
-fun check_thread_manager () = Synchronized.change global_state
+fun check_thread_manager params = Synchronized.change global_state
   (fn state as {manager, timeout_heap, active, cancelling, messages, store} =>
     if (case manager of SOME thread => Thread.isActive thread | NONE => false) then state
     else let val manager = SOME (Toplevel.thread false (fn () =>
@@ -221,7 +241,7 @@
         do
           (Synchronized.timed_access global_state (SOME o time_limit o #timeout_heap) action
             |> these
-            |> List.app (unregister "Interrupted (reached timeout)");
+            |> List.app (unregister params "Timed out.");
             print_new_messages ();
             (*give threads some time to respond to interrupt*)
             OS.Process.sleep min_wait_time)
@@ -231,7 +251,7 @@
 
 (* register ATP thread *)
 
-fun register birth_time death_time (thread, desc) =
+fun register params birth_time death_time (thread, desc) =
  (Synchronized.change global_state
     (fn {manager, timeout_heap, active, cancelling, messages, store} =>
       let
@@ -239,7 +259,7 @@
         val active' = update_thread (thread, (birth_time, death_time, desc)) active;
         val state' = make_state manager timeout_heap' active' cancelling messages store;
       in state' end);
-  check_thread_manager ());
+  check_thread_manager params);
 
 
 
@@ -279,7 +299,7 @@
         space_implode "\n\n"
           ("Trying to interrupt the following ATPs:" :: map cancelling_info cancelling);
 
-  in writeln (running ^ "\n" ^ interrupting) end;
+  in priority (running ^ "\n" ^ interrupting) end;
 
 fun messages opt_limit =
   let
@@ -288,15 +308,14 @@
     val header =
       "Recent ATP messages" ^
         (if length store <= limit then ":" else " (" ^ string_of_int limit ^ " displayed):");
-  in writeln (space_implode "\n\n" (header :: #1 (chop limit store))) end;
-
+  in List.app priority (header :: break_into_chunks (#1 (chop limit store))) end
 
 
 (** The Sledgehammer **)
 
 (* named provers *)
 
-fun err_dup_prover name = error ("Duplicate prover: " ^ quote name);
+fun err_dup_prover name = error ("Duplicate prover: " ^ quote name ^ ".");
 
 structure Provers = Theory_Data
 (
@@ -314,8 +333,9 @@
 fun get_prover thy name =
   Option.map #1 (Symtab.lookup (Provers.get thy) name);
 
-fun available_atps thy = Pretty.writeln
-  (Pretty.strs ("ATPs:" :: sort_strings (Symtab.keys (Provers.get thy))));
+fun available_atps thy =
+  priority ("Available ATPs: " ^
+            commas (sort_strings (Symtab.keys (Provers.get thy))) ^ ".")
 
 
 (* start prover thread *)
@@ -323,7 +343,7 @@
 fun start_prover (params as {timeout, ...}) birth_time death_time i
                  relevance_override proof_state name =
   (case get_prover (Proof.theory_of proof_state) name of
-    NONE => warning ("Unknown ATP: " ^ quote name)
+    NONE => warning ("Unknown ATP: " ^ quote name ^ ".")
   | SOME prover =>
       let
         val {context = ctxt, facts, goal} = Proof.goal proof_state;
@@ -334,7 +354,7 @@
 
         val _ = Toplevel.thread true (fn () =>
           let
-            val _ = register birth_time death_time (Thread.self (), desc);
+            val _ = register params birth_time death_time (Thread.self (), desc)
             val problem =
               {subgoal = i, goal = (ctxt, (facts, goal)),
                relevance_override = relevance_override, axiom_clauses = NONE,
@@ -342,8 +362,8 @@
             val message = #message (prover params timeout problem)
               handle Sledgehammer_HOL_Clause.TRIVIAL =>
                   metis_line i n []
-                | ERROR msg => ("Error: " ^ msg);
-            val _ = unregister message (Thread.self ());
+                | ERROR msg => "Error: " ^ msg ^ ".\n";
+            val _ = unregister params message (Thread.self ());
           in () end);
       in () end);
 
--- a/src/HOL/Tools/ATP_Manager/atp_minimal.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Tools/ATP_Manager/atp_minimal.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -9,79 +9,38 @@
   type params = ATP_Manager.params
   type prover = ATP_Manager.prover
   type prover_result = ATP_Manager.prover_result
-  type 'a minimize_fun = ('a list -> bool) -> 'a list -> 'a list
 
-  val linear_minimize : 'a minimize_fun
-  val binary_minimize : 'a minimize_fun
   val minimize_theorems :
-    params -> (string * thm list) minimize_fun -> prover -> string -> int
-    -> Proof.state -> (string * thm list) list
+    params -> prover -> string -> int -> Proof.state -> (string * thm list) list
     -> (string * thm list) list option * string
 end;
 
 structure ATP_Minimal : ATP_MINIMAL =
 struct
 
+open Sledgehammer_Util
 open Sledgehammer_Fact_Preprocessor
 open Sledgehammer_Proof_Reconstruct
 open ATP_Manager
 
-type 'a minimize_fun = ('a list -> bool) -> 'a list -> 'a list
-
 (* Linear minimization algorithm *)
 
-fun linear_minimize p s =
+fun linear_minimize test s =
   let
-    fun aux [] needed = needed
-      | aux (x :: xs) needed = aux xs (needed |> not (p (xs @ needed)) ? cons x)
-  in aux s [] end;
-
-(* Binary minimalization *)
-
-local
-  fun isplit (l, r) [] = (l, r)
-    | isplit (l, r) [h] = (h :: l, r)
-    | isplit (l, r) (h1 :: h2 :: t) = isplit (h1 :: l, h2 :: r) t
-in
-  fun split lst = isplit ([], []) lst
-end
-
-local
-  fun min _ _ [] = raise Empty
-    | min _ _ [s0] = [s0]
-    | min p sup s =
-      let
-        val (l0, r0) = split s
-      in
-        if p (sup @ l0) then
-          min p sup l0
-        else if p (sup @ r0) then
-          min p sup r0
-        else
-          let
-            val l = min p (sup @ r0) l0
-            val r = min p (sup @ l) r0
-          in l @ r end
-      end
-in
-  (* return a minimal subset v of s that satisfies p
-   @pre p(s) & ~p([]) & monotone(p)
-   @post v subset s & p(v) &
-         forall e in v. ~p(v \ e)
-   *)
-  fun binary_minimize p s =
-    case min p [] s of
-      [x] => if p [] then [] else [x]
-    | m => m
-end
+    fun aux [] p = p
+      | aux (x :: xs) (needed, result) =
+        case test (xs @ needed) of
+          SOME result => aux xs (needed, result)
+        | NONE => aux xs (x :: needed, result)
+  in aux s end
 
 
 (* failure check and producing answer *)
 
-datatype 'a prove_result = Success of 'a | Failure | Timeout | Error
+datatype outcome = Success | Failure | Timeout | Error
 
-val string_of_result =
-  fn Success _ => "Success"
+val string_of_outcome =
+  fn Success => "Success"
    | Failure => "Failure"
    | Timeout => "Timeout"
    | Error => "Error"
@@ -93,80 +52,90 @@
    ("# Cannot determine problem status within resource limit", Timeout),
    ("Error", Error)]
 
-fun produce_answer ({success, proof, internal_thm_names, filtered_clauses, ...}
-                    : prover_result) =
+fun outcome_of_result (result as {success, proof, ...} : prover_result) =
   if success then
-    (Success (Vector.foldr (op ::) [] internal_thm_names, filtered_clauses),
-     proof)
-  else
-    let
-      val failure = failure_strings |> get_first (fn (s, t) =>
-          if String.isSubstring s proof then SOME (t, proof) else NONE)
-    in
-      (case failure of
-        SOME res => res
-      | NONE => (Failure, proof))
-    end
-
+    Success
+  else case get_first (fn (s, outcome) =>
+                          if String.isSubstring s proof then SOME outcome
+                          else NONE) failure_strings of
+    SOME outcome => outcome
+  | NONE => Failure
 
 (* wrapper for calling external prover *)
 
 fun sledgehammer_test_theorems (params as {full_types, ...} : params) prover
-        timeout subgoal state filtered name_thms_pairs =
+        timeout subgoal state filtered_clauses name_thms_pairs =
   let
-    val _ = priority ("Testing " ^ string_of_int (length name_thms_pairs) ^
-                      " theorems... ")
+    val num_theorems = length name_thms_pairs
+    val _ = priority ("Testing " ^ string_of_int num_theorems ^
+                      " theorem" ^ plural_s num_theorems ^ "...")
     val name_thm_pairs = maps (fn (n, ths) => map (pair n) ths) name_thms_pairs
     val axclauses = cnf_rules_pairs (Proof.theory_of state) name_thm_pairs
-    val {context = ctxt, facts, goal} = Proof.goal state
+    val {context = ctxt, facts, goal} = Proof.raw_goal state
     val problem =
      {subgoal = subgoal, goal = (ctxt, (facts, goal)),
       relevance_override = {add = [], del = [], only = false},
-      axiom_clauses = SOME axclauses, filtered_clauses = filtered}
-    val (result, proof) = produce_answer (prover params timeout problem)
-    val _ = priority (string_of_result result)
-  in (result, proof) end
-
+      axiom_clauses = SOME axclauses,
+      filtered_clauses = SOME (the_default axclauses filtered_clauses)}
+  in
+    `outcome_of_result (prover params timeout problem)
+    |>> tap (priority o string_of_outcome)
+  end
 
 (* minimalization of thms *)
 
-fun minimize_theorems (params as {minimize_timeout, ...}) gen_min prover
-                      prover_name i state name_thms_pairs =
+fun minimize_theorems (params as {minimize_timeout, isar_proof, modulus,
+                                  sorts, ...})
+                      prover atp_name i state name_thms_pairs =
   let
     val msecs = Time.toMilliseconds minimize_timeout
+    val n = length name_thms_pairs
     val _ =
-      priority ("Minimize called with " ^ string_of_int (length name_thms_pairs) ^
-        " theorems, ATP: " ^ prover_name ^
-        ", time limit: " ^ string_of_int msecs ^ " ms")
+      priority ("Sledgehammer minimizer: ATP " ^ quote atp_name ^
+                " with a time limit of " ^ string_of_int msecs ^ " ms.")
     val test_thms_fun =
       sledgehammer_test_theorems params prover minimize_timeout i state
     fun test_thms filtered thms =
-      case test_thms_fun filtered thms of (Success _, _) => true | _ => false
-    val n = state |> Proof.raw_goal |> #goal |> prop_of |> Logic.count_prems
+      case test_thms_fun filtered thms of
+        (Success, result) => SOME result
+      | _ => NONE
+
+    val {context = ctxt, facts, goal} = Proof.goal state;
+    val n = Logic.count_prems (prop_of goal)
   in
     (* try prove first to check result and get used theorems *)
     (case test_thms_fun NONE name_thms_pairs of
-      (Success (used, filtered), _) =>
+      (Success, result as {internal_thm_names, filtered_clauses, ...}) =>
         let
-          val ordered_used = sort_distinct string_ord used
+          val used = internal_thm_names |> Vector.foldr (op ::) []
+                                        |> sort_distinct string_ord
           val to_use =
-            if length ordered_used < length name_thms_pairs then
-              filter (fn (name1, _) => List.exists (equal name1) ordered_used) name_thms_pairs
+            if length used < length name_thms_pairs then
+              filter (fn (name1, _) => List.exists (curry (op =) name1) used)
+                     name_thms_pairs
             else name_thms_pairs
-          val min_thms =
-            if null to_use then []
-            else gen_min (test_thms (SOME filtered)) to_use
-          val min_names = sort_distinct string_ord (map fst min_thms)
+          val (min_thms, {proof, internal_thm_names, ...}) =
+            linear_minimize (test_thms (SOME filtered_clauses)) to_use
+                            ([], result)
+          val n = length min_thms
           val _ = priority (cat_lines
-            ["Minimal " ^ string_of_int (length min_thms) ^ " theorems"])
-        in (SOME min_thms, metis_line i n min_names) end
+            ["Minimized: " ^ string_of_int n ^ " theorem" ^ plural_s n] ^ ".")
+        in
+          (SOME min_thms,
+           proof_text isar_proof true modulus sorts atp_name proof
+                      internal_thm_names ctxt goal i |> fst)
+        end
     | (Timeout, _) =>
-        (NONE, "Timeout: You may need to increase the time limit of " ^
-          string_of_int msecs ^ " ms. Invoke \"atp_minimize [time=...]\".")
-    | (Error, msg) =>
-        (NONE, "Error in prover: " ^ msg)
+        (NONE, "Timeout: You can increase the time limit using the \"timeout\" \
+               \option (e.g., \"timeout = " ^
+               string_of_int (10 + msecs div 1000) ^ " s\").")
+    | (Error, {message, ...}) => (NONE, "ATP error: " ^ message)
     | (Failure, _) =>
-        (NONE, "Failure: No proof with the theorems supplied"))
+        (* Failure sometimes means a timeout, unfortunately. *)
+        (NONE, "Failure: No proof was found with the current time limit. You \
+               \can increase the time limit using the \"timeout\" \
+               \option (e.g., \"timeout = " ^
+               string_of_int (10 + msecs div 1000) ^ " s\")."))
     handle Sledgehammer_HOL_Clause.TRIVIAL =>
         (SOME [], metis_line i n [])
       | ERROR msg => (NONE, "Error: " ^ msg)
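
The reworked linear_minimize keeps an element only when the prover test fails without it, threading the most recent successful result through the recursion.  The same scheme as a stand-alone Standard ML sketch with a toy test (test_sum, the threshold 10 and the sample list are invented for illustration):

  (* Generic linear minimization: drop x if the test still succeeds on
     the remaining elements, otherwise keep it; carry the latest
     successful result along. *)
  fun linear_minimize test xs init =
    let
      fun aux [] p = p
        | aux (x :: xs) (needed, result) =
            (case test (xs @ needed) of
               SOME result' => aux xs (needed, result')
             | NONE => aux xs (x :: needed, result))
    in aux xs init end;

  (* Toy test: succeed as long as the sum stays at or above a threshold. *)
  fun test_sum threshold ys =
    let val s = List.foldl (op +) 0 ys
    in if s >= threshold then SOME s else NONE end;

  (* linear_minimize (test_sum 10) [1, 2, 3, 4, 5, 6] ([], 21) evaluates
     to ([6, 5], 11): no remaining element can be dropped without the
     sum falling below 10. *)
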
--- a/src/HOL/Tools/ATP_Manager/atp_wrapper.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Tools/ATP_Manager/atp_wrapper.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -20,6 +20,7 @@
 structure ATP_Wrapper : ATP_WRAPPER =
 struct
 
+open Sledgehammer_Util
 open Sledgehammer_Fact_Preprocessor
 open Sledgehammer_HOL_Clause
 open Sledgehammer_Fact_Filter
@@ -47,7 +48,7 @@
   arguments: Time.time -> string,
   failure_strs: string list,
   max_new_clauses: int,
-  prefers_theory_const: bool,
+  prefers_theory_relevant: bool,
   supports_isar_proofs: bool};
 
 
@@ -65,8 +66,8 @@
           else SOME "Ill-formed ATP output"
   | (failure :: _) => SOME failure
 
-fun generic_prover get_facts prepare write cmd args failure_strs produce_answer
-        name ({full_types, ...} : params)
+fun generic_prover overlord get_facts prepare write_file cmd args failure_strs
+        proof_text name ({debug, full_types, explicit_apply, ...} : params)
         ({subgoal, goal, relevance_override, axiom_clauses, filtered_clauses}
          : problem) =
   let
@@ -87,16 +88,20 @@
       prepare goal_cls chain_ths the_axiom_clauses the_filtered_clauses thy;
 
     (* path to unique problem file *)
-    val destdir' = Config.get ctxt destdir;
+    val destdir' = if overlord then getenv "ISABELLE_HOME_USER"
+                   else Config.get ctxt destdir;
     val problem_prefix' = Config.get ctxt problem_prefix;
     fun prob_pathname nr =
-      let val probfile =
-        Path.basic (problem_prefix' ^ serial_string () ^ "_" ^ string_of_int nr)
+      let
+        val probfile =
+          Path.basic (problem_prefix' ^
+                      (if overlord then "_" ^ name else serial_string ())
+                      ^ "_" ^ string_of_int nr)
       in
         if destdir' = "" then File.tmp_path probfile
         else if File.exists (Path.explode destdir')
         then Path.append  (Path.explode destdir') probfile
-        else error ("No such directory: " ^ destdir')
+        else error ("No such directory: " ^ destdir' ^ ".")
       end;
 
     (* write out problem file and call prover *)
@@ -123,28 +128,30 @@
       if Config.get ctxt measure_runtime then split_time s else (s, 0)
     fun run_on probfile =
       if File.exists cmd then
-        write full_types probfile clauses
+        write_file full_types explicit_apply probfile clauses
         |> pair (apfst split_time' (bash_output (cmd_line probfile)))
-      else error ("Bad executable: " ^ Path.implode cmd);
+      else error ("Bad executable: " ^ Path.implode cmd ^ ".");
 
-    (* if problemfile has not been exported, delete problemfile; otherwise export proof, too *)
+    (* If the problem file has not been exported, remove it; otherwise, export
+       the proof file too. *)
     fun cleanup probfile = if destdir' = "" then try File.rm probfile else NONE;
     fun export probfile (((proof, _), _), _) =
-      if destdir' = "" then ()
-      else File.write (Path.explode (Path.implode probfile ^ "_proof")) proof;
+      if destdir' = "" then
+        ()
+      else
+        File.write (Path.explode (Path.implode probfile ^ "_proof"))
+                   ("% " ^ timestamp () ^ "\n" ^ proof)
 
-    val (((proof, atp_run_time_in_msecs), rc), conj_pos) =
+    val (((proof, atp_run_time_in_msecs), rc), _) =
       with_path cleanup export run_on (prob_pathname subgoal);
 
-    (* check for success and print out some information on failure *)
+    (* Check for success and print out some information on failure. *)
     val failure = find_failure failure_strs proof;
     val success = rc = 0 andalso is_none failure;
     val (message, relevant_thm_names) =
-      if is_some failure then ("External prover failed.", [])
-      else if rc <> 0 then ("External prover failed: " ^ proof, [])
-      else
-        (produce_answer name (proof, internal_thm_names, conj_pos, ctxt, th,
-                              subgoal));
+      if is_some failure then ("ATP failed to find a proof.\n", [])
+      else if rc <> 0 then ("ATP error: " ^ proof ^ ".\n", [])
+      else proof_text name proof internal_thm_names ctxt th subgoal
   in
     {success = success, message = message,
      relevant_thm_names = relevant_thm_names,
@@ -158,27 +165,28 @@
 
 fun generic_tptp_prover
         (name, {command, arguments, failure_strs, max_new_clauses,
-                prefers_theory_const, supports_isar_proofs})
-        (params as {respect_no_atp, relevance_threshold, convergence,
-                    theory_const, higher_order, follow_defs, isar_proof,
+                prefers_theory_relevant, supports_isar_proofs})
+        (params as {overlord, respect_no_atp, relevance_threshold, convergence,
+                    theory_relevant, higher_order, follow_defs, isar_proof,
                     modulus, sorts, ...})
         timeout =
-  generic_prover
+  generic_prover overlord
       (get_relevant_facts respect_no_atp relevance_threshold convergence
                           higher_order follow_defs max_new_clauses
-                          (the_default prefers_theory_const theory_const))
-      (prepare_clauses higher_order false) write_tptp_file command
+                          (the_default prefers_theory_relevant theory_relevant))
+      (prepare_clauses higher_order false)
+      (write_tptp_file (overlord andalso not isar_proof)) command
       (arguments timeout) failure_strs
-      (if supports_isar_proofs andalso isar_proof then
-         structured_isar_proof modulus sorts
-       else
-         metis_lemma_list false) name params;
+      (proof_text (supports_isar_proofs andalso isar_proof) false modulus sorts)
+      name params
 
 fun tptp_prover name p = (name, generic_tptp_prover (name, p));
 
 
 (** common provers **)
 
+fun generous_to_secs time = (Time.toMilliseconds time + 999) div 1000
+
 (* Vampire *)
 
 (* NB: Vampire does not work without explicit time limit. *)
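
The new generous_to_secs defined just above converts such timeouts to whole
seconds by rounding up, so a sub-second remainder is never silently dropped
when building a prover's command line.  A self-contained check of the
arithmetic (SML Basis Time only):

  (* (ms + 999) div 1000 is ceiling division on milliseconds:
     4200 ms -> 5,  4000 ms -> 4,  1 ms -> 1. *)
  fun generous_to_secs time = (Time.toMilliseconds time + 999) div 1000

  val five = generous_to_secs (Time.fromMilliseconds 4200)   (* 5 *)
  val four = generous_to_secs (Time.fromMilliseconds 4000)   (* 4 *)
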
@@ -186,11 +194,11 @@
 val vampire_config : prover_config =
   {command = Path.explode "$VAMPIRE_HOME/vampire",
    arguments = (fn timeout => "--output_syntax tptp --mode casc -t " ^
-                              string_of_int (Time.toSeconds timeout)),
+                              string_of_int (generous_to_secs timeout)),
    failure_strs =
      ["Satisfiability detected", "Refutation not found", "CANNOT PROVE"],
    max_new_clauses = 60,
-   prefers_theory_const = false,
+   prefers_theory_relevant = false,
    supports_isar_proofs = true}
 val vampire = tptp_prover "vampire" vampire_config
 
@@ -201,13 +209,13 @@
   {command = Path.explode "$E_HOME/eproof",
    arguments = (fn timeout => "--tstp-in --tstp-out -l5 -xAutoDev \
                               \-tAutoDev --silent --cpu-limit=" ^
-                              string_of_int (Time.toSeconds timeout)),
+                              string_of_int (generous_to_secs timeout)),
    failure_strs =
        ["SZS status: Satisfiable", "SZS status Satisfiable",
         "SZS status: ResourceOut", "SZS status ResourceOut",
         "# Cannot determine problem status"],
    max_new_clauses = 100,
-   prefers_theory_const = false,
+   prefers_theory_relevant = false,
    supports_isar_proofs = true}
 val e = tptp_prover "e" e_config
 
@@ -216,29 +224,32 @@
 
 fun generic_dfg_prover
         (name, ({command, arguments, failure_strs, max_new_clauses,
-                 prefers_theory_const, ...} : prover_config))
-        (params as {respect_no_atp, relevance_threshold, convergence,
-                    theory_const, higher_order, follow_defs, ...})
+                 prefers_theory_relevant, ...} : prover_config))
+        (params as {overlord, respect_no_atp, relevance_threshold, convergence,
+                    theory_relevant, higher_order, follow_defs, ...})
         timeout =
-  generic_prover
+  generic_prover overlord
       (get_relevant_facts respect_no_atp relevance_threshold convergence
                           higher_order follow_defs max_new_clauses
-                          (the_default prefers_theory_const theory_const))
+                          (the_default prefers_theory_relevant theory_relevant))
       (prepare_clauses higher_order true) write_dfg_file command
-      (arguments timeout) failure_strs (metis_lemma_list true) name params;
+      (arguments timeout) failure_strs (metis_proof_text false false)
+      name params
 
 fun dfg_prover (name, p) = (name, generic_dfg_prover (name, p));
 
+(* The "-VarWeight=3" option helps the higher-order problems, probably by
+   counteracting the presence of "hAPP". *)
 val spass_config : prover_config =
  {command = Path.explode "$SPASS_HOME/SPASS",
   arguments = (fn timeout => "-Auto -SOS=1 -PGiven=0 -PProblem=0 -Splits=0" ^
-    " -FullRed=0 -DocProof -TimeLimit=" ^
-    string_of_int (Time.toSeconds timeout)),
+    " -FullRed=0 -DocProof -VarWeight=3 -TimeLimit=" ^
+    string_of_int (generous_to_secs timeout)),
   failure_strs =
     ["SPASS beiseite: Completion found.", "SPASS beiseite: Ran out of time.",
      "SPASS beiseite: Maximal number of loops exceeded."],
   max_new_clauses = 40,
-  prefers_theory_const = true,
+  prefers_theory_relevant = true,
   supports_isar_proofs = false}
 val spass = dfg_prover ("spass", spass_config)
 
@@ -272,15 +283,15 @@
 val remote_failure_strs = ["Remote-script could not extract proof"];
 
 fun remote_prover_config prover_prefix args
-        ({failure_strs, max_new_clauses, prefers_theory_const, ...}
+        ({failure_strs, max_new_clauses, prefers_theory_relevant, ...}
          : prover_config) : prover_config =
   {command = Path.explode "$ISABELLE_ATP_MANAGER/SystemOnTPTP",
    arguments = (fn timeout =>
-     args ^ " -t " ^ string_of_int (Time.toSeconds timeout) ^ " -s " ^
+     args ^ " -t " ^ string_of_int (generous_to_secs timeout) ^ " -s " ^
      the_system prover_prefix),
    failure_strs = remote_failure_strs @ failure_strs,
    max_new_clauses = max_new_clauses,
-   prefers_theory_const = prefers_theory_const,
+   prefers_theory_relevant = prefers_theory_relevant,
    supports_isar_proofs = false}
 
 val remote_vampire =
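
After this round of renamings, the prover_config record that all of the above
configurations instantiate reads roughly as sketched below (reconstructed from
the first hunk of this file; only prefers_theory_relevant changed its name,
not its meaning):

  type prover_config =
    {command: Path.T,
     arguments: Time.time -> string,
     failure_strs: string list,
     max_new_clauses: int,
     prefers_theory_relevant: bool,
     supports_isar_proofs: bool}
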
--- a/src/HOL/Tools/Datatype/datatype.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Tools/Datatype/datatype.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -682,7 +682,7 @@
             (constrs @ [(c, map (dtyp_of_typ new_dts) cargs')],
               constr_syntax' @ [(cname, mx')], sorts'')
           end handle ERROR msg => cat_error msg
-           ("The error above occured in constructor " ^ quote (Binding.str_of cname) ^
+           ("The error above occurred in constructor " ^ quote (Binding.str_of cname) ^
             " of datatype " ^ quote (Binding.str_of tname));
 
         val (constrs', constr_syntax', sorts') =
--- a/src/HOL/Tools/Nitpick/nitpick.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Tools/Nitpick/nitpick.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -225,13 +225,15 @@
         o Pretty.chunks o cons (Pretty.str "") o single
         o Pretty.mark Markup.hilite
       else
-        priority o Pretty.string_of
+        (fn s => (priority s; if debug then tracing s else ()))
+        o Pretty.string_of
     (* (unit -> Pretty.T) -> unit *)
     fun pprint_m f = () |> not auto ? pprint o f
     fun pprint_v f = () |> verbose ? pprint o f
     fun pprint_d f = () |> debug ? pprint o f
     (* string -> unit *)
     val print = pprint o curry Pretty.blk 0 o pstrs
+    val print_g = pprint o Pretty.str
     (* (unit -> string) -> unit *)
     val print_m = pprint_m o K o plazy
     val print_v = pprint_v o K o plazy
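
In the branch shown above, pprint now also mirrors each message to the
tracing channel when debug is set, and the new print_g gives the
commented-out debugging dumps below a raw-string entry point.  The
composition pattern, sketched self-containedly with hypothetical stand-ins
for Isabelle's priority and tracing:

  (* Hypothetical stand-ins so the sketch runs outside Isabelle. *)
  fun priority s = TextIO.print ("priority: " ^ s ^ "\n")
  fun tracing s = TextIO.print ("tracing:  " ^ s ^ "\n")

  (* Same shape as the new pprint: always report, and also trace when debugging. *)
  fun report debug s = (priority s; if debug then tracing s else ())

  val _ = report true "Monotonic types: ..."   (* goes to both channels *)
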
@@ -265,8 +267,8 @@
     val original_max_potential = max_potential
     val original_max_genuine = max_genuine
 (*
-    val _ = priority ("*** " ^ Syntax.string_of_term ctxt orig_t)
-    val _ = List.app (fn t => priority ("*** " ^ Syntax.string_of_term ctxt t))
+    val _ = print_g ("*** " ^ Syntax.string_of_term ctxt orig_t)
+    val _ = List.app (fn t => print_g ("*** " ^ Syntax.string_of_term ctxt t))
                      orig_assm_ts
 *)
     val max_bisim_depth = fold Integer.max bisim_depths ~1
@@ -338,7 +340,7 @@
                           | _ => false) finitizes)
     val standard = forall snd stds
 (*
-    val _ = List.app (priority o string_for_nut ctxt) (nondef_us @ def_us)
+    val _ = List.app (print_g o string_for_nut ctxt) (nondef_us @ def_us)
 *)
 
     val unique_scope = forall (curry (op =) 1 o length o snd) cards_assigns
@@ -431,10 +433,10 @@
             else
               ()
 (*
-    val _ = priority "Monotonic types:"
-    val _ = List.app (priority o string_for_type ctxt) mono_Ts
-    val _ = priority "Nonmonotonic types:"
-    val _ = List.app (priority o string_for_type ctxt) nonmono_Ts
+    val _ = print_g "Monotonic types:"
+    val _ = List.app (print_g o string_for_type ctxt) mono_Ts
+    val _ = print_g "Nonmonotonic types:"
+    val _ = List.app (print_g o string_for_type ctxt) nonmono_Ts
 *)
 
     val need_incremental = Int.max (max_potential, max_genuine) >= 2
@@ -472,10 +474,10 @@
                 raise TOO_LARGE ("Nitpick.pick_them_nits_in_term.\
                                  \problem_for_scope", "too large scope")
 (*
-        val _ = priority "Offsets:"
+        val _ = print_g "Offsets:"
         val _ = List.app (fn (T, j0) =>
-                             priority (string_for_type ctxt T ^ " = " ^
-                                       string_of_int j0))
+                             print_g (string_for_type ctxt T ^ " = " ^
+                                    string_of_int j0))
                          (Typtab.dest ofs)
 *)
         val all_exact = forall (is_exact_type datatypes true) all_Ts
@@ -504,7 +506,7 @@
         val nondef_us = map (choose_reps_in_nut scope unsound rep_table false)
                             nondef_us
 (*
-        val _ = List.app (priority o string_for_nut ctxt)
+        val _ = List.app (print_g o string_for_nut ctxt)
                          (free_names @ sel_names @ nonsel_names @
                           nondef_us @ def_us)
 *)
@@ -658,9 +660,9 @@
               ();
             if not standard andalso likely_in_struct_induct_step then
               print "The existence of a nonstandard model suggests that the \
-                    \induction hypothesis is not general enough or perhaps \
-                    \even wrong. See the \"Inductive Properties\" section of \
-                    \the Nitpick manual for details (\"isabelle doc nitpick\")."
+                    \induction hypothesis is not general enough or may even be \
+                    \wrong. See the Nitpick manual's \"Inductive Properties\" \
+                    \section for details (\"isabelle doc nitpick\")."
             else
               ();
             if has_syntactic_sorts orig_t then
--- a/src/HOL/Tools/Nitpick/nitpick_hol.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Tools/Nitpick/nitpick_hol.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -1265,9 +1265,8 @@
     boring <> is_funky_typedef_name thy s andalso is_typedef thy s
   | is_typedef_axiom _ _ _ = false
 (* term -> bool *)
-fun is_class_axiom t =
-  (t |> Logic.strip_horn |> swap |> op :: |> map Logic.dest_of_class; true)
-  handle TERM _ => false
+val is_class_axiom =
+  Logic.strip_horn #> swap #> op :: #> forall (can Logic.dest_of_class)
 
 (* Distinguishes between (1) constant definition axioms, (2) type arity and
    typedef axioms, and (3) other axioms, and returns the pair ((1), (3)).
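
The rewritten is_class_axiom replaces the exception handler with Isabelle/ML's
can combinator: can f x holds exactly when f x returns without raising, so the
composition above checks every premise and the conclusion individually.  A
plain-SML illustration of the idiom (the destructor below is an invented
stand-in for Logic.dest_of_class, and this simplified can ignores Isabelle's
interrupt handling):

  fun can f x = (f x; true) handle _ => false

  fun dest_even n = if n mod 2 = 0 then n div 2 else raise Fail "odd"   (* stand-in *)

  val yes = List.all (can dest_even) [2, 4, 6]   (* true  *)
  val no  = List.all (can dest_even) [2, 3, 4]   (* false *)
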
--- a/src/HOL/Tools/Nitpick/nitpick_kodkod.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Tools/Nitpick/nitpick_kodkod.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -1431,16 +1431,22 @@
            else if is_Cst Unrep u2 then to_compare_with_unrep u1 true_atom
            else kk_nat_less (to_integer u1) (to_integer u2)
          | @{typ int} => kk_int_less (to_integer u1) (to_integer u2)
-         | _ => double_rel_rel_let kk
-                    (fn r1 => fn r2 =>
-                        kk_rel_if
-                            (fold kk_and (map_filter (fn (u, r) =>
-                                 if is_opt_rep (rep_of u) then SOME (kk_some r)
-                                 else NONE) [(u1, r1), (u2, r2)]) KK.True)
-                            (atom_from_formula kk bool_j0 (KK.LT (pairself
-                                (int_expr_from_atom kk (type_of u1)) (r1, r2))))
-                            KK.None)
-                    (to_r u1) (to_r u2))
+         | _ =>
+           let
+             val R1 = Opt (Atom (card_of_rep (rep_of u1),
+                                 offset_of_type ofs (type_of u1)))
+           in
+             double_rel_rel_let kk
+                 (fn r1 => fn r2 =>
+                     kk_rel_if
+                         (fold kk_and (map_filter (fn (u, r) =>
+                              if is_opt_rep (rep_of u) then SOME (kk_some r)
+                              else NONE) [(u1, r1), (u2, r2)]) KK.True)
+                         (atom_from_formula kk bool_j0 (KK.LT (pairself
+                             (int_expr_from_atom kk (type_of u1)) (r1, r2))))
+                         KK.None)
+                 (to_rep R1 u1) (to_rep R1 u2)
+            end)
       | Op2 (The, _, R, u1, u2) =>
         if is_opt_rep R then
           let val r1 = to_opt (Func (unopt_rep R, bool_atom_R)) u1 in
@@ -1854,40 +1860,42 @@
                                        oper (KK.IntReg 0) (KK.IntReg 1))]))))
       end
     (* rep -> rep -> KK.rel_expr -> nut -> KK.rel_expr *)
-    and to_apply res_R func_u arg_u =
-      case unopt_rep (rep_of func_u) of
-        Unit =>
-        let val j0 = offset_of_type ofs (type_of func_u) in
+    and to_apply (R as Formula _) func_u arg_u =
+        raise REP ("Nitpick_Kodkod.to_apply", [R])
+      | to_apply res_R func_u arg_u =
+        case unopt_rep (rep_of func_u) of
+          Unit =>
+          let val j0 = offset_of_type ofs (type_of func_u) in
+            to_guard [arg_u] res_R
+                     (rel_expr_from_rel_expr kk res_R (Atom (1, j0)) (KK.Atom j0))
+          end
+        | Atom (1, j0) =>
           to_guard [arg_u] res_R
-                   (rel_expr_from_rel_expr kk res_R (Atom (1, j0)) (KK.Atom j0))
-        end
-      | Atom (1, j0) =>
-        to_guard [arg_u] res_R
-                 (rel_expr_from_rel_expr kk res_R (Atom (1, j0)) (to_r func_u))
-      | Atom (k, _) =>
-        let
-          val dom_card = card_of_rep (rep_of arg_u)
-          val ran_R = Atom (exact_root dom_card k,
-                            offset_of_type ofs (range_type (type_of func_u)))
-        in
-          to_apply_vect dom_card ran_R res_R (to_vect dom_card ran_R func_u)
-                        arg_u
-        end
-      | Vect (1, R') =>
-        to_guard [arg_u] res_R
-                 (rel_expr_from_rel_expr kk res_R R' (to_r func_u))
-      | Vect (k, R') => to_apply_vect k R' res_R (to_r func_u) arg_u
-      | Func (R, Formula Neut) =>
-        to_guard [arg_u] res_R (rel_expr_from_formula kk res_R
-                                    (kk_subset (to_opt R arg_u) (to_r func_u)))
-      | Func (Unit, R2) =>
-        to_guard [arg_u] res_R
-                 (rel_expr_from_rel_expr kk res_R R2 (to_r func_u))
-      | Func (R1, R2) =>
-        rel_expr_from_rel_expr kk res_R R2
-            (kk_n_fold_join kk true R1 R2 (to_opt R1 arg_u) (to_r func_u))
-        |> body_rep R2 = Formula Neut ? to_guard [arg_u] res_R
-      | _ => raise NUT ("Nitpick_Kodkod.to_apply", [func_u])
+                   (rel_expr_from_rel_expr kk res_R (Atom (1, j0)) (to_r func_u))
+        | Atom (k, _) =>
+          let
+            val dom_card = card_of_rep (rep_of arg_u)
+            val ran_R = Atom (exact_root dom_card k,
+                              offset_of_type ofs (range_type (type_of func_u)))
+          in
+            to_apply_vect dom_card ran_R res_R (to_vect dom_card ran_R func_u)
+                          arg_u
+          end
+        | Vect (1, R') =>
+          to_guard [arg_u] res_R
+                   (rel_expr_from_rel_expr kk res_R R' (to_r func_u))
+        | Vect (k, R') => to_apply_vect k R' res_R (to_r func_u) arg_u
+        | Func (R, Formula Neut) =>
+          to_guard [arg_u] res_R (rel_expr_from_formula kk res_R
+                                      (kk_subset (to_opt R arg_u) (to_r func_u)))
+        | Func (Unit, R2) =>
+          to_guard [arg_u] res_R
+                   (rel_expr_from_rel_expr kk res_R R2 (to_r func_u))
+        | Func (R1, R2) =>
+          rel_expr_from_rel_expr kk res_R R2
+              (kk_n_fold_join kk true R1 R2 (to_opt R1 arg_u) (to_r func_u))
+          |> body_rep R2 = Formula Neut ? to_guard [arg_u] res_R
+        | _ => raise NUT ("Nitpick_Kodkod.to_apply", [func_u])
     (* int -> rep -> rep -> KK.rel_expr -> nut *)
     and to_apply_vect k R' res_R func_r arg_u =
       let
--- a/src/HOL/Tools/Nitpick/nitpick_nut.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Tools/Nitpick/nitpick_nut.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -1130,8 +1130,8 @@
               if is_boolean_type T then
                 bool_rep polar opt
               else
-                smart_range_rep ofs T1 (fn () => card_of_type card_assigns T)
-                                (unopt_rep R1)
+                lazy_range_rep ofs T1 (fn () => card_of_type card_assigns T)
+                               (unopt_rep R1)
                 |> opt ? opt_rep ofs T
           in s_op2 Apply T ran_R u1 u2 end
         | Op2 (Lambda, T, _, u1, u2) =>
--- a/src/HOL/Tools/Nitpick/nitpick_rep.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Tools/Nitpick/nitpick_rep.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -34,7 +34,7 @@
   val is_one_rep : rep -> bool
   val is_lone_rep : rep -> bool
   val dest_Func : rep -> rep * rep
-  val smart_range_rep : int Typtab.table -> typ -> (unit -> int) -> rep -> rep
+  val lazy_range_rep : int Typtab.table -> typ -> (unit -> int) -> rep -> rep
   val binder_reps : rep -> rep list
   val body_rep : rep -> rep
   val one_rep : int Typtab.table -> typ -> rep -> rep
@@ -153,16 +153,16 @@
 fun dest_Func (Func z) = z
   | dest_Func R = raise REP ("Nitpick_Rep.dest_Func", [R])
 (* int Typtab.table -> typ -> (unit -> int) -> rep -> rep *)
-fun smart_range_rep _ _ _ Unit = Unit
-  | smart_range_rep _ _ _ (Vect (_, R)) = R
-  | smart_range_rep _ _ _ (Func (_, R2)) = R2
-  | smart_range_rep ofs T ran_card (Opt R) =
-    Opt (smart_range_rep ofs T ran_card R)
-  | smart_range_rep ofs (Type (@{type_name fun}, [_, T2])) _ (Atom (1, _)) =
+fun lazy_range_rep _ _ _ Unit = Unit
+  | lazy_range_rep _ _ _ (Vect (_, R)) = R
+  | lazy_range_rep _ _ _ (Func (_, R2)) = R2
+  | lazy_range_rep ofs T ran_card (Opt R) =
+    Opt (lazy_range_rep ofs T ran_card R)
+  | lazy_range_rep ofs (Type (@{type_name fun}, [_, T2])) _ (Atom (1, _)) =
     Atom (1, offset_of_type ofs T2)
-  | smart_range_rep ofs (Type (@{type_name fun}, [_, T2])) ran_card (Atom _) =
+  | lazy_range_rep ofs (Type (@{type_name fun}, [_, T2])) ran_card (Atom _) =
     Atom (ran_card (), offset_of_type ofs T2)
-  | smart_range_rep _ _ _ R = raise REP ("Nitpick_Rep.smart_range_rep", [R])
+  | lazy_range_rep _ _ _ R = raise REP ("Nitpick_Rep.lazy_range_rep", [R])
 
 (* rep -> rep list *)
 fun binder_reps (Func (R1, R2)) = R1 :: binder_reps R2
--- a/src/HOL/Tools/Quotient/quotient_def.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Tools/Quotient/quotient_def.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -64,7 +64,7 @@
   val absrep_trm = absrep_fun AbsF lthy (fastype_of rhs, lhs_ty) $ rhs
   val prop = Logic.mk_equals (lhs, Syntax.check_term lthy absrep_trm)
   val (_, prop') = Local_Defs.cert_def lthy prop
-  val (_, newrhs) = Primitive_Defs.abs_def prop'
+  val (_, newrhs) = Local_Defs.abs_def prop'
 
   val ((trm, (_ , thm)), lthy') = Local_Theory.define ((qconst_bname, mx), (attr, newrhs)) lthy
 
--- a/src/HOL/Tools/Quotient/quotient_tacs.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Tools/Quotient/quotient_tacs.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -653,10 +653,13 @@
 
 fun lifted qtys ctxt thm =
 let
-  val ((_, [thm']), ctxt') = Variable.import false [thm] ctxt
-  val goal = (quotient_lift_all qtys ctxt' o prop_of) thm'
+  (* When the theorem is atomized, eta redexes are contracted, so we
+     also eta-contract the original theorem. *)
+  val thm' = Drule.eta_contraction_rule thm
+  val ((_, [thm'']), ctxt') = Variable.import false [thm'] ctxt
+  val goal = (quotient_lift_all qtys ctxt' o prop_of) thm''
 in
-  Goal.prove ctxt' [] [] goal (K (lift_tac ctxt' [thm] 1))
+  Goal.prove ctxt' [] [] goal (K (lift_tac ctxt' [thm'] 1))
   |> singleton (ProofContext.export ctxt' ctxt)
 end;
 
--- a/src/HOL/Tools/Sledgehammer/metis_tactics.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Tools/Sledgehammer/metis_tactics.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -69,25 +69,25 @@
 
 fun metis_lit b c args = (b, (c, args));
 
-fun hol_type_to_fol (AtomV x) = Metis.Term.Var x
-  | hol_type_to_fol (AtomF x) = Metis.Term.Fn (x, [])
-  | hol_type_to_fol (Comp (tc, tps)) =
-    Metis.Term.Fn (tc, map hol_type_to_fol tps);
+fun hol_type_to_fol (TyVar (x, _)) = Metis.Term.Var x
+  | hol_type_to_fol (TyFree (s, _)) = Metis.Term.Fn (s, [])
+  | hol_type_to_fol (TyConstr ((s, _), tps)) =
+    Metis.Term.Fn (s, map hol_type_to_fol tps);
 
 (*These two functions insert type literals before the real literals. That is the
   opposite order from TPTP linkup, but maybe OK.*)
 
 fun hol_term_to_fol_FO tm =
   case strip_combterm_comb tm of
-      (CombConst(c,_,tys), tms) =>
+      (CombConst ((c, _), _, tys), tms) =>
         let val tyargs = map hol_type_to_fol tys
             val args   = map hol_term_to_fol_FO tms
         in Metis.Term.Fn (c, tyargs @ args) end
-    | (CombVar(v,_), []) => Metis.Term.Var v
+    | (CombVar ((v, _), _), []) => Metis.Term.Var v
     | _ => error "hol_term_to_fol_FO";
 
-fun hol_term_to_fol_HO (CombVar (a, _)) = Metis.Term.Var a
-  | hol_term_to_fol_HO (CombConst (a, _, tylist)) =
+fun hol_term_to_fol_HO (CombVar ((s, _), _)) = Metis.Term.Var s
+  | hol_term_to_fol_HO (CombConst ((a, _), _, tylist)) =
       Metis.Term.Fn (fn_isa_to_met a, map hol_type_to_fol tylist)
   | hol_term_to_fol_HO (CombApp (tm1, tm2)) =
        Metis.Term.Fn (".", map hol_term_to_fol_HO [tm1, tm2]);
@@ -95,26 +95,26 @@
 (*The fully-typed translation, to avoid type errors*)
 fun wrap_type (tm, ty) = Metis.Term.Fn("ti", [tm, hol_type_to_fol ty]);
 
-fun hol_term_to_fol_FT (CombVar(a, ty)) = wrap_type (Metis.Term.Var a, ty)
-  | hol_term_to_fol_FT (CombConst(a, ty, _)) =
+fun hol_term_to_fol_FT (CombVar ((s, _), ty)) = wrap_type (Metis.Term.Var s, ty)
+  | hol_term_to_fol_FT (CombConst((a, _), ty, _)) =
       wrap_type (Metis.Term.Fn(fn_isa_to_met a, []), ty)
   | hol_term_to_fol_FT (tm as CombApp(tm1,tm2)) =
        wrap_type (Metis.Term.Fn(".", map hol_term_to_fol_FT [tm1,tm2]),
                   type_of_combterm tm);
 
 fun hol_literal_to_fol FO (Literal (pol, tm)) =
-      let val (CombConst(p,_,tys), tms) = strip_combterm_comb tm
+      let val (CombConst((p, _), _, tys), tms) = strip_combterm_comb tm
           val tylits = if p = "equal" then [] else map hol_type_to_fol tys
           val lits = map hol_term_to_fol_FO tms
       in metis_lit pol (fn_isa_to_met p) (tylits @ lits) end
   | hol_literal_to_fol HO (Literal (pol, tm)) =
      (case strip_combterm_comb tm of
-          (CombConst("equal",_,_), tms) =>
+          (CombConst(("equal", _), _, _), tms) =>
             metis_lit pol "=" (map hol_term_to_fol_HO tms)
         | _ => metis_lit pol "{}" [hol_term_to_fol_HO tm])   (*hBOOL*)
   | hol_literal_to_fol FT (Literal (pol, tm)) =
      (case strip_combterm_comb tm of
-          (CombConst("equal",_,_), tms) =>
+          (CombConst(("equal", _), _, _), tms) =>
             metis_lit pol "=" (map hol_term_to_fol_FT tms)
         | _ => metis_lit pol "{}" [hol_term_to_fol_FT tm])   (*hBOOL*);
 
@@ -693,13 +693,19 @@
                 val unused = th_cls_pairs |> map_filter (fn (name, cls) =>
                   if common_thm used cls then NONE else SOME name)
             in
-                if null unused then ()
-                else warning ("Metis: unused theorems " ^ commas_quote unused);
+                if not (common_thm used cls) then
+                  warning "Metis: The goal is provable because the context is \
+                          \inconsistent."
+                else if not (null unused) then
+                  warning ("Metis: Unused theorems: " ^ commas_quote unused
+                           ^ ".")
+                else
+                  ();
                 case result of
                     (_,ith)::_ =>
-                        (trace_msg (fn () => "success: " ^ Display.string_of_thm ctxt ith);
+                        (trace_msg (fn () => "Success: " ^ Display.string_of_thm ctxt ith);
                          [ith])
-                  | _ => (trace_msg (fn () => "Metis: no result");
+                  | _ => (trace_msg (fn () => "Metis: No result");
                           [])
             end
         | Metis.Resolution.Satisfiable _ =>
--- a/src/HOL/Tools/Sledgehammer/sledgehammer_fact_filter.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Tools/Sledgehammer/sledgehammer_fact_filter.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -127,8 +127,8 @@
 
 (*Inserts a dummy "constant" referring to the theory name, so that relevance
   takes the given theory into account.*)
-fun const_prop_of theory_const th =
- if theory_const then
+fun const_prop_of theory_relevant th =
+ if theory_relevant then
   let val name = Context.theory_name (theory_of_thm th)
       val t = Const (name ^ ". 1", HOLogic.boolT)
   in  t $ prop_of th  end
@@ -157,7 +157,7 @@
 
 structure CTtab = Table(type key = const_typ list val ord = dict_ord const_typ_ord);
 
-fun count_axiom_consts theory_const thy ((thm,_), tab) = 
+fun count_axiom_consts theory_relevant thy ((thm,_), tab) = 
   let fun count_const (a, T, tab) =
         let val (c, cts) = const_with_typ thy (a,T)
         in  (*Two-dimensional table update. Constant maps to types maps to count.*)
@@ -170,16 +170,15 @@
             count_term_consts (t, count_term_consts (u, tab))
         | count_term_consts (Abs(_,_,t), tab) = count_term_consts (t, tab)
         | count_term_consts (_, tab) = tab
-  in  count_term_consts (const_prop_of theory_const thm, tab)  end;
+  in  count_term_consts (const_prop_of theory_relevant thm, tab)  end;
 
 
 (**** Actual Filtering Code ****)
 
 (*The frequency of a constant is the sum of those of all instances of its type.*)
 fun const_frequency ctab (c, cts) =
-  let val pairs = CTtab.dest (the (Symtab.lookup ctab c))
-      fun add ((cts',m), n) = if match_types cts cts' then m+n else n
-  in  List.foldl add 0 pairs  end;
+  CTtab.fold (fn (cts', m) => match_types cts cts' ? Integer.add m)
+             (the (Symtab.lookup ctab c)) 0
 
 (*Add in a constant's weight, as determined by its frequency.*)
 fun add_ct_weight ctab ((c,T), w) =
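
const_frequency is now a single fold over the per-constant type table, using
Isabelle's ? combinator (b ? f is f when b holds and the identity otherwise)
to add a count only for matching type instances.  The same summation over a
plain list, as a self-contained sketch with an invented match predicate:

  infix 2 ?
  fun b ? f = if b then f else (fn x => x)   (* Isabelle's version uses I for the identity *)

  (* Sum the counts m of entries whose key matches, in the style of the fold above. *)
  fun frequency matches entries =
    List.foldl (fn ((key, m), n) => (matches key ? (fn n => n + m)) n) 0 entries

  val seven = frequency (fn k => k = "nat") [("nat", 3), ("int", 5), ("nat", 4)]   (* 7 *)
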
@@ -202,8 +201,8 @@
   let val tab = add_term_consts_typs_rm thy (t, null_const_tab)
   in  Symtab.fold add_expand_pairs tab []  end;
 
-fun pair_consts_typs_axiom theory_const thy (p as (thm, _)) =
-  (p, (consts_typs_of_term thy (const_prop_of theory_const thm)));
+fun pair_consts_typs_axiom theory_relevant thy (p as (thm, _)) =
+  (p, (consts_typs_of_term thy (const_prop_of theory_relevant thm)));
 
 exception ConstFree;
 fun dest_ConstFree (Const aT) = aT
@@ -253,46 +252,54 @@
   end;
 
 fun relevant_clauses ctxt convergence follow_defs max_new
-                     (relevance_override as {add, del, only}) thy ctab p
-                     rel_consts =
-  let val add_thms = maps (ProofContext.get_fact ctxt) add
-      val del_thms = maps (ProofContext.get_fact ctxt) del
-      fun relevant ([],_) [] = [] : (thm * (string * int)) list  (*Nothing added this iteration*)
-        | relevant (newpairs,rejects) [] =
-            let val (newrels,more_rejects) = take_best max_new newpairs
-                val new_consts = maps #2 newrels
-                val rel_consts' = List.foldl add_const_typ_table rel_consts new_consts
-                val newp = p + (1.0-p) / convergence
+                     (relevance_override as {add, del, only}) thy ctab =
+  let
+    val thms_for_facts =
+      maps (maps (cnf_axiom thy) o ProofContext.get_fact ctxt)
+    val add_thms = thms_for_facts add
+    val del_thms = thms_for_facts del
+    fun iter p rel_consts =
+      let
+        fun relevant ([], _) [] = []  (* Nothing added this iteration *)
+          | relevant (newpairs,rejects) [] =
+            let
+              val (newrels, more_rejects) = take_best max_new newpairs
+              val new_consts = maps #2 newrels
+              val rel_consts' = List.foldl add_const_typ_table rel_consts new_consts
+              val newp = p + (1.0-p) / convergence
             in
-              trace_msg (fn () => "relevant this iteration: " ^ Int.toString (length newrels));
-                map #1 newrels @ 
-                relevant_clauses ctxt convergence follow_defs max_new
-                                 relevance_override thy ctab newp rel_consts'
-                                 (more_rejects @ rejects)
+              trace_msg (fn () => "relevant this iteration: " ^
+                                  Int.toString (length newrels));
+              map #1 newrels @ iter newp rel_consts' (more_rejects @ rejects)
             end
-        | relevant (newrels, rejects)
-                   ((ax as (clsthm as (thm, (name, n)), consts_typs)) :: axs) =
+          | relevant (newrels, rejects)
+                     ((ax as (clsthm as (thm, (name, n)), consts_typs)) :: axs) =
             let
               val weight = if member Thm.eq_thm del_thms thm then 0.0
                            else if member Thm.eq_thm add_thms thm then 1.0
                            else if only then 0.0
                            else clause_weight ctab rel_consts consts_typs
             in
-              if p <= weight orelse (follow_defs andalso defines thy (#1 clsthm) rel_consts)
-              then (trace_msg (fn () => (name ^ " clause " ^ Int.toString n ^ 
-                                            " passes: " ^ Real.toString weight));
-                    relevant ((ax,weight)::newrels, rejects) axs)
-              else relevant (newrels, ax::rejects) axs
+              if p <= weight orelse
+                 (follow_defs andalso defines thy (#1 clsthm) rel_consts) then
+                (trace_msg (fn () => name ^ " clause " ^ Int.toString n ^ 
+                                     " passes: " ^ Real.toString weight);
+                relevant ((ax, weight) :: newrels, rejects) axs)
+              else
+                relevant (newrels, ax :: rejects) axs
             end
-    in  trace_msg (fn () => ("relevant_clauses, current pass mark = " ^ Real.toString p));
-        relevant ([],[])
-    end;
+        in
+          trace_msg (fn () => "relevant_clauses, current pass mark: " ^
+                              Real.toString p);
+          relevant ([], [])
+        end
+  in iter end
         
 fun relevance_filter ctxt relevance_threshold convergence follow_defs max_new
-                     theory_const relevance_override thy axioms goals = 
+                     theory_relevant relevance_override thy axioms goals = 
   if relevance_threshold > 0.0 then
     let
-      val const_tab = List.foldl (count_axiom_consts theory_const thy)
+      val const_tab = List.foldl (count_axiom_consts theory_relevant thy)
                                  Symtab.empty axioms
       val goal_const_tab = get_goal_consts_typs thy goals
       val _ =
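
The restructuring of relevant_clauses into an inner iter keeps the original
iteration scheme: each pass accepts the clauses whose weight reaches the
current pass mark p, then raises the mark via p + (1 - p) / convergence, so
the threshold climbs toward 1 and later passes admit fewer clauses.  A worked
instance with made-up numbers (threshold 0.5, convergence 2):

  (* Next pass mark, exactly as in "val newp = p + (1.0-p) / convergence" above. *)
  fun next_mark convergence p = p + (1.0 - p) / convergence

  val p0 = 0.5
  val p1 = next_mark 2.0 p0   (* 0.75  *)
  val p2 = next_mark 2.0 p1   (* 0.875 *)
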
@@ -301,7 +308,8 @@
       val relevant =
         relevant_clauses ctxt convergence follow_defs max_new relevance_override
                          thy const_tab relevance_threshold goal_const_tab
-                         (map (pair_consts_typs_axiom theory_const thy) axioms)
+                         (map (pair_consts_typs_axiom theory_relevant thy)
+                              axioms)
     in
       trace_msg (fn () => "Total relevant: " ^ Int.toString (length relevant));
       relevant
@@ -360,11 +368,11 @@
           val ths = filter_out bad_for_atp ths0;
         in
           if Facts.is_concealed facts name orelse null ths orelse
-            respect_no_atp andalso is_package_def name then I
-          else
-            (case find_first check_thms [name1, name2, name] of
-              NONE => I
-            | SOME a => cons (a, ths))
+             (respect_no_atp andalso is_package_def name) then
+            I
+          else case find_first check_thms [name1, name2, name] of
+            NONE => I
+          | SOME a => cons (a, ths)
         end);
   in valid_facts global_facts @ valid_facts local_facts end;
 
@@ -497,20 +505,24 @@
   | SOME b => not b
 
 fun get_relevant_facts respect_no_atp relevance_threshold convergence
-                       higher_order follow_defs max_new theory_const
-                       relevance_override (ctxt, (chain_ths, th)) goal_cls =
-  let
-    val thy = ProofContext.theory_of ctxt
-    val is_FO = is_first_order thy higher_order goal_cls
-    val included_cls = get_all_lemmas respect_no_atp ctxt
-      |> cnf_rules_pairs thy |> make_unique
-      |> restrict_to_logic thy is_FO
-      |> remove_unwanted_clauses
-  in
-    relevance_filter ctxt relevance_threshold convergence follow_defs max_new
-                     theory_const relevance_override thy included_cls
-                     (map prop_of goal_cls)
-  end;
+                       higher_order follow_defs max_new theory_relevant
+                       (relevance_override as {add, only, ...})
+                       (ctxt, (chain_ths, th)) goal_cls =
+  if (only andalso null add) orelse relevance_threshold > 1.0 then
+    []
+  else
+    let
+      val thy = ProofContext.theory_of ctxt
+      val is_FO = is_first_order thy higher_order goal_cls
+      val included_cls = get_all_lemmas respect_no_atp ctxt
+        |> cnf_rules_pairs thy |> make_unique
+        |> restrict_to_logic thy is_FO
+        |> remove_unwanted_clauses
+    in
+      relevance_filter ctxt relevance_threshold convergence follow_defs max_new
+                       theory_relevant relevance_override thy included_cls
+                       (map prop_of goal_cls)
+    end
 
 (* prepare for passing to writer,
    create additional clauses based on the information from extra_cls *)
@@ -518,7 +530,8 @@
   let
     (* add chain thms *)
     val chain_cls =
-      cnf_rules_pairs thy (filter check_named (map pairname chain_ths))
+      cnf_rules_pairs thy (filter check_named
+                                  (map (`Thm.get_name_hint) chain_ths))
     val axcls = chain_cls @ axcls
     val extra_cls = chain_cls @ extra_cls
     val is_FO = is_first_order thy higher_order goal_cls
--- a/src/HOL/Tools/Sledgehammer/sledgehammer_fact_preprocessor.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Tools/Sledgehammer/sledgehammer_fact_preprocessor.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -10,7 +10,6 @@
   val trace_msg: (unit -> string) -> unit
   val skolem_prefix: string
   val cnf_axiom: theory -> thm -> thm list
-  val pairname: thm -> string * thm
   val multi_base_blacklist: string list
   val bad_for_atp: thm -> bool
   val type_has_topsort: typ -> bool
@@ -91,7 +90,7 @@
             val (c, thy') =
               Sign.declare_const ((Binding.conceal (Binding.name cname), cT), NoSyn) thy
             val cdef = cname ^ "_def"
-            val (ax, thy'') =
+            val ((_, ax), thy'') =
               Thm.add_def true false (Binding.name cdef, Logic.mk_equals (c, rhs)) thy'
             val ax' = Drule.export_without_context ax
           in dec_sko (subst_bound (list_comb (c, args), p)) (ax' :: axs, thy'') end
@@ -370,7 +369,7 @@
 val update_cache = ThmCache.map o apfst o Thmtab.update;
 fun mark_seen name = ThmCache.map (apsnd (Symtab.update (name, ())));
 
-(*Exported function to convert Isabelle theorems into axiom clauses*)
+(* Convert Isabelle theorems into axiom clauses. *)
 fun cnf_axiom thy th0 =
   let val th = Thm.transfer thy th0 in
     case lookup_cache thy th of
@@ -379,11 +378,6 @@
   end;
 
 
-(**** Rules from the context ****)
-
-fun pairname th = (Thm.get_name_hint th, th);
-
-
 (**** Translate a set of theorems into CNF ****)
 
 fun pair_name_cls k (n, []) = []
--- a/src/HOL/Tools/Sledgehammer/sledgehammer_fol_clause.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Tools/Sledgehammer/sledgehammer_fol_clause.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -4,7 +4,7 @@
 Storing/printing FOL clauses and arity clauses.  Typed equality is
 treated differently.
 
-FIXME: combine with res_hol_clause!
+FIXME: combine with sledgehammer_hol_clause!
 *)
 
 signature SLEDGEHAMMER_FOL_CLAUSE =
@@ -30,13 +30,19 @@
   val make_fixed_const : bool -> string -> string
   val make_fixed_type_const : bool -> string -> string
   val make_type_class : string -> string
+  type name = string * string
+  type name_pool = string Symtab.table * string Symtab.table
+  val empty_name_pool : bool -> name_pool option
+  val pool_map : ('a -> 'b -> 'c * 'b) -> 'a list -> 'b -> 'c list * 'b
+  val nice_name : name -> name_pool option -> string * name_pool option
   datatype kind = Axiom | Conjecture
   type axiom_name = string
   datatype fol_type =
-      AtomV of string
-    | AtomF of string
-    | Comp of string * fol_type list
-  val string_of_fol_type : fol_type -> string
+    TyVar of name |
+    TyFree of name |
+    TyConstr of name * fol_type list
+  val string_of_fol_type :
+    fol_type -> name_pool option -> string * name_pool option
   datatype type_literal = LTVar of string * string | LTFree of string * string
   exception CLAUSE of string * term
   val add_typs : typ list -> type_literal list
@@ -51,12 +57,14 @@
   val make_classrel_clauses: theory -> class list -> class list -> classrel_clause list
   val make_arity_clauses_dfg: bool -> theory -> string list -> class list -> class list * arity_clause list
   val make_arity_clauses: theory -> string list -> class list -> class list * arity_clause list
-  val add_type_sort_preds: typ * int Symtab.table -> int Symtab.table
-  val add_classrel_clause_preds : classrel_clause * int Symtab.table -> int Symtab.table
+  val add_type_sort_preds: typ -> int Symtab.table -> int Symtab.table
+  val add_classrel_clause_preds :
+    classrel_clause -> int Symtab.table -> int Symtab.table
   val class_of_arityLit: arLit -> class
-  val add_arity_clause_preds: arity_clause * int Symtab.table -> int Symtab.table
-  val add_foltype_funcs: fol_type * int Symtab.table -> int Symtab.table
-  val add_arity_clause_funcs: arity_clause * int Symtab.table -> int Symtab.table
+  val add_arity_clause_preds: arity_clause -> int Symtab.table -> int Symtab.table
+  val add_fol_type_funcs: fol_type -> int Symtab.table -> int Symtab.table
+  val add_arity_clause_funcs:
+    arity_clause -> int Symtab.table -> int Symtab.table
   val init_functab: int Symtab.table
   val dfg_sign: bool -> string -> string
   val dfg_of_typeLit: bool -> type_literal -> string
@@ -96,7 +104,7 @@
 val tconst_prefix = "tc_";
 val class_prefix = "class_";
 
-fun union_all xss = List.foldl (uncurry (union (op =))) [] xss;
+fun union_all xss = fold (union (op =)) xss []
 
 (* Provide readable names for the more common symbolic functions *)
 val const_trans_table =
@@ -207,7 +215,68 @@
 fun make_type_class clas = class_prefix ^ ascii_of clas;
 
 
-(***** definitions and functions for FOL clauses, for conversion to TPTP or DFG format. *****)
+(**** name pool ****)
+ 
+type name = string * string
+type name_pool = string Symtab.table * string Symtab.table
+
+fun empty_name_pool readable_names =
+  if readable_names then SOME (`I Symtab.empty) else NONE
+
+fun pool_map f xs =
+  fold_rev (fn x => fn (ys, pool) => f x pool |>> (fn y => y :: ys)) xs
+  o pair []
+
+fun add_nice_name full_name nice_prefix j the_pool =
+  let
+    val nice_name = nice_prefix ^ (if j = 0 then "" else "_" ^ Int.toString j)
+  in
+    case Symtab.lookup (snd the_pool) nice_name of
+      SOME full_name' =>
+      if full_name = full_name' then (nice_name, the_pool)
+      else add_nice_name full_name nice_prefix (j + 1) the_pool
+    | NONE =>
+      (nice_name, (Symtab.update_new (full_name, nice_name) (fst the_pool),
+                   Symtab.update_new (nice_name, full_name) (snd the_pool)))
+  end
+
+fun translate_first_char f s =
+  String.str (f (String.sub (s, 0))) ^ String.extract (s, 1, NONE)
+
+fun readable_name full_name s =
+  let
+    val s = s |> Long_Name.base_name
+              |> fold remove_all ["\<^sub>", "\<^bsub>", "\<^esub>", "\<^isub>"]
+    val s' = s |> explode |> rev |> dropwhile (curry (op =) "'")
+    val s' =
+      (s' |> rev
+          |> implode
+          |> String.translate
+                 (fn c => if Char.isAlphaNum c orelse c = #"_" then String.str c
+                          else ""))
+      ^ replicate_string (String.size s - length s') "_"
+    val s' =
+      if s' = "" orelse not (Char.isAlpha (String.sub (s', 0))) then "X" ^ s'
+      else s'
+    val s' = if s' = "op" then full_name else s'
+  in
+    case (Char.isLower (String.sub (full_name, 0)),
+          Char.isLower (String.sub (s', 0))) of
+      (true, false) => translate_first_char Char.toLower s'
+    | (false, true) => translate_first_char Char.toUpper s'
+    | _ => s'
+  end
+
+fun nice_name (full_name, _) NONE = (full_name, NONE)
+  | nice_name (full_name, desired_name) (SOME the_pool) =
+    case Symtab.lookup (fst the_pool) full_name of
+      SOME nice_name => (nice_name, SOME the_pool)
+    | NONE => add_nice_name full_name (readable_name full_name desired_name) 0
+                            the_pool
+              |> apsnd SOME
+
+(**** Definitions and functions for FOL clauses, for conversion to TPTP or DFG
+      format ****)
 
 datatype kind = Axiom | Conjecture;
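
The name pool added in the hunk above backs the new "readable names" mode:
nice_name memoizes a short, human-readable identifier per full internal name,
and add_nice_name resolves clashes by appending _1, _2, ... until the
candidate is free in the reverse table.  A hedged usage sketch, assuming the
definitions above (and Isabelle's Symtab) are in scope; the concrete names
fed in are invented:

  val pool = empty_name_pool true                       (* SOME (Symtab.empty, Symtab.empty) *)

  (* Two distinct full names whose readable form collides; the second gets "_1",
     and asking again for the first reuses the memoized answer. *)
  val (s1, pool) = nice_name ("foo.bar.x", "x") pool    (* "x"       *)
  val (s2, pool) = nice_name ("foo.baz.x", "x") pool    (* "x_1"     *)
  val (s3, pool) = nice_name ("foo.bar.x", "x") pool    (* "x" again *)
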
 
@@ -215,35 +284,24 @@
 
 (**** Isabelle FOL clauses ****)
 
-(*FIXME: give the constructors more sensible names*)
-datatype fol_type = AtomV of string
-                  | AtomF of string
-                  | Comp of string * fol_type list;
+datatype fol_type =
+  TyVar of name |
+  TyFree of name |
+  TyConstr of name * fol_type list
 
-fun string_of_fol_type (AtomV x) = x
-  | string_of_fol_type (AtomF x) = x
-  | string_of_fol_type (Comp(tcon,tps)) =
-      tcon ^ (paren_pack (map string_of_fol_type tps));
+fun string_of_fol_type (TyVar sp) pool = nice_name sp pool
+  | string_of_fol_type (TyFree sp) pool = nice_name sp pool
+  | string_of_fol_type (TyConstr (sp, tys)) pool =
+    let
+      val (s, pool) = nice_name sp pool
+      val (ss, pool) = pool_map string_of_fol_type tys pool
+    in (s ^ paren_pack ss, pool) end
 
 (*First string is the type class; the second is a TVar or TFfree*)
 datatype type_literal = LTVar of string * string | LTFree of string * string;
 
 exception CLAUSE of string * term;
 
-fun atomic_type (TFree (a,_)) = AtomF(make_fixed_type_var a)
-  | atomic_type (TVar (v,_))  = AtomV(make_schematic_type_var v);
-
-(*Flatten a type to a fol_type while accumulating sort constraints on the TFrees and
-  TVars it contains.*)
-fun type_of dfg (Type (a, Ts)) =
-      let val (folTyps, ts) = types_of dfg Ts
-          val t = make_fixed_type_const dfg a
-      in (Comp(t,folTyps), ts) end
-  | type_of dfg T = (atomic_type T, [T])
-and types_of dfg Ts =
-      let val (folTyps,ts) = ListPair.unzip (map (type_of dfg) Ts)
-      in (folTyps, union_all ts) end;
-
 (*Make literals for sorted type variables*)
 fun sorts_on_typs_aux (_, [])   = []
   | sorts_on_typs_aux ((x,i),  s::ss) =
@@ -261,7 +319,7 @@
   | pred_of_sort (LTFree (s,ty)) = (s,1)
 
 (*Given a list of sorted type variables, return a list of type literals.*)
-fun add_typs Ts = List.foldl (uncurry (union (op =))) [] (map sorts_on_typs Ts);
+fun add_typs Ts = fold (union (op =)) (map sorts_on_typs Ts) []
 
 (*The correct treatment of TFrees like 'a in lemmas (axiom clauses) is not clear.
   * Ignoring them leads to unsound proofs, since we do nothing to ensure that 'a
@@ -282,8 +340,6 @@
 
 (**** Isabelle arities ****)
 
-exception ARCLAUSE of string;
-
 datatype arLit = TConsLit of class * string * string list
                | TVarLit of class * string;
 
@@ -321,11 +377,11 @@
 (*Generate all pairs (sub,super) such that sub is a proper subclass of super in theory thy.*)
 fun class_pairs thy [] supers = []
   | class_pairs thy subs supers =
-      let val class_less = Sorts.class_less(Sign.classes_of thy)
-          fun add_super sub (super,pairs) =
-                if class_less (sub,super) then (sub,super)::pairs else pairs
-          fun add_supers (sub,pairs) = List.foldl (add_super sub) pairs supers
-      in  List.foldl add_supers [] subs  end;
+      let
+        val class_less = Sorts.class_less (Sign.classes_of thy)
+        fun add_super sub super = class_less (sub, super) ? cons (sub, super)
+        fun add_supers sub = fold (add_super sub) supers
+      in fold add_supers subs [] end
 
 fun make_classrel_clause (sub,super) =
   ClassrelClause {axiom_name = clrelclause_prefix ^ ascii_of sub ^ "_" ^ ascii_of super,
@@ -350,18 +406,18 @@
           arity_clause dfg (class::seen) n (tcons,ars)
 
 fun multi_arity_clause dfg [] = []
-  | multi_arity_clause dfg ((tcons,ars) :: tc_arlists) =
-      arity_clause dfg [] 1 (tcons, ars)  @  multi_arity_clause dfg tc_arlists
+  | multi_arity_clause dfg ((tcons, ars) :: tc_arlists) =
+      arity_clause dfg [] 1 (tcons, ars) @ multi_arity_clause dfg tc_arlists
 
 (*Generate all pairs (tycon,class,sorts) such that tycon belongs to class in theory thy
   provided its arguments have the corresponding sorts.*)
 fun type_class_pairs thy tycons classes =
   let val alg = Sign.classes_of thy
-      fun domain_sorts (tycon,class) = Sorts.mg_domain alg tycon [class]
-      fun add_class tycon (class,pairs) =
-            (class, domain_sorts(tycon,class))::pairs
-            handle Sorts.CLASS_ERROR _ => pairs
-      fun try_classes tycon = (tycon, List.foldl (add_class tycon) [] classes)
+      fun domain_sorts tycon = Sorts.mg_domain alg tycon o single
+      fun add_class tycon class =
+        cons (class, domain_sorts tycon class)
+        handle Sorts.CLASS_ERROR _ => I
+      fun try_classes tycon = (tycon, fold (add_class tycon) classes [])
   in  map try_classes tycons  end;
 
 (*Proving one (tycon, class) membership may require proving others, so iterate.*)
@@ -383,35 +439,35 @@
 (*FIXME: multiple-arity checking doesn't work, as update_new is the wrong
   function (it flags repeated declarations of a function, even with the same arity)*)
 
-fun update_many (tab, keypairs) = List.foldl (uncurry Symtab.update) tab keypairs;
+fun update_many keypairs = fold Symtab.update keypairs
 
-fun add_type_sort_preds (T, preds) =
-      update_many (preds, map pred_of_sort (sorts_on_typs T));
+val add_type_sort_preds = update_many o map pred_of_sort o sorts_on_typs
 
-fun add_classrel_clause_preds (ClassrelClause {subclass,superclass,...}, preds) =
-  Symtab.update (subclass,1) (Symtab.update (superclass,1) preds);
+fun add_classrel_clause_preds (ClassrelClause {subclass, superclass, ...}) =
+  Symtab.update (subclass, 1) #> Symtab.update (superclass, 1)
 
 fun class_of_arityLit (TConsLit (tclass, _, _)) = tclass
   | class_of_arityLit (TVarLit (tclass, _)) = tclass;
 
-fun add_arity_clause_preds (ArityClause {conclLit,premLits,...}, preds) =
-  let val classes = map (make_type_class o class_of_arityLit) (conclLit::premLits)
-      fun upd (class,preds) = Symtab.update (class,1) preds
-  in  List.foldl upd preds classes  end;
+fun add_arity_clause_preds (ArityClause {conclLit, premLits, ...}) =
+  let
+    val classes = map (make_type_class o class_of_arityLit)
+                      (conclLit :: premLits)
+  in fold (Symtab.update o rpair 1) classes end;
 
 (*** Find occurrences of functions in clauses ***)
 
-fun add_foltype_funcs (AtomV _, funcs) = funcs
-  | add_foltype_funcs (AtomF a, funcs) = Symtab.update (a,0) funcs
-  | add_foltype_funcs (Comp(a,tys), funcs) =
-      List.foldl add_foltype_funcs (Symtab.update (a, length tys) funcs) tys;
+fun add_fol_type_funcs (TyVar _) = I
+  | add_fol_type_funcs (TyFree (s, _)) = Symtab.update (s, 0)
+  | add_fol_type_funcs (TyConstr ((s, _), tys)) =
+    Symtab.update (s, length tys) #> fold add_fol_type_funcs tys
 
 (*TFrees are recorded as constants*)
 fun add_type_sort_funcs (TVar _, funcs) = funcs
   | add_type_sort_funcs (TFree (a, _), funcs) =
       Symtab.update (make_fixed_type_var a, 0) funcs
 
-fun add_arity_clause_funcs (ArityClause {conclLit,...}, funcs) =
+fun add_arity_clause_funcs (ArityClause {conclLit,...}) funcs =
   let val TConsLit (_, tcons, tvars) = conclLit
   in  Symtab.update (tcons, length tvars) funcs  end;
 
@@ -495,22 +551,24 @@
 
 (**** Produce TPTP files ****)
 
-(*Attach sign in TPTP syntax: false means negate.*)
 fun tptp_sign true s = s
   | tptp_sign false s = "~ " ^ s
 
-fun tptp_of_typeLit pos (LTVar (s,ty))  = tptp_sign pos (s ^ "(" ^ ty ^ ")")
-  | tptp_of_typeLit pos (LTFree (s,ty)) = tptp_sign pos  (s ^ "(" ^ ty ^ ")");
+fun tptp_of_typeLit pos (LTVar (s, ty))  = tptp_sign pos (s ^ "(" ^ ty ^ ")")
+  | tptp_of_typeLit pos (LTFree (s, ty)) = tptp_sign pos (s ^ "(" ^ ty ^ ")")
+
+fun tptp_cnf name kind formula =
+  "cnf(" ^ name ^ ", " ^ kind ^ ",\n    " ^ formula ^ ").\n"
 
-fun gen_tptp_cls (cls_id,ax_name,Axiom,lits,tylits) =
-      "cnf(" ^ string_of_clausename (cls_id,ax_name) ^ ",axiom," ^ 
-               tptp_pack (tylits@lits) ^ ").\n"
-  | gen_tptp_cls (cls_id,ax_name,Conjecture,lits,_) =
-      "cnf(" ^ string_of_clausename (cls_id,ax_name) ^ ",negated_conjecture," ^ 
-               tptp_pack lits ^ ").\n";
+fun gen_tptp_cls (cls_id, ax_name, Axiom, lits, tylits) =
+      tptp_cnf (string_of_clausename (cls_id, ax_name)) "axiom"
+               (tptp_pack (tylits @ lits))
+  | gen_tptp_cls (cls_id, ax_name, Conjecture, lits, _) =
+      tptp_cnf (string_of_clausename (cls_id, ax_name)) "negated_conjecture"
+               (tptp_pack lits)
 
 fun tptp_tfree_clause tfree_lit =
-    "cnf(" ^ "tfree_tcs," ^ "negated_conjecture" ^ "," ^ tptp_pack[tfree_lit] ^ ").\n";
+    tptp_cnf "tfree_tcs" "negated_conjecture" (tptp_pack [tfree_lit])
 
 fun tptp_of_arLit (TConsLit (c,t,args)) =
       tptp_sign true (make_type_class c ^ "(" ^ t ^ paren_pack args ^ ")")
@@ -518,14 +576,14 @@
       tptp_sign false (make_type_class c ^ "(" ^ str ^ ")")
 
 fun tptp_arity_clause (ArityClause{axiom_name,conclLit,premLits,...}) =
-  "cnf(" ^ string_of_ar axiom_name ^ ",axiom," ^
-  tptp_pack (map tptp_of_arLit (conclLit :: premLits)) ^ ").\n";
+  tptp_cnf (string_of_ar axiom_name) "axiom"
+           (tptp_pack (map tptp_of_arLit (conclLit :: premLits)))
 
 fun tptp_classrelLits sub sup =
   let val tvar = "(T)"
   in  tptp_pack [tptp_sign false (sub^tvar), tptp_sign true (sup^tvar)]  end;
 
 fun tptp_classrel_clause (ClassrelClause {axiom_name,subclass,superclass,...}) =
-  "cnf(" ^ axiom_name ^ ",axiom," ^ tptp_classrelLits subclass superclass ^ ").\n"
+  tptp_cnf axiom_name "axiom" (tptp_classrelLits subclass superclass)
 
 end;
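
The new tptp_cnf helper fixes the layout of every emitted CNF line in one
place.  A direct consequence of its definition (clause name and literals are
made up):

  tptp_cnf "cls_foo_0" "axiom" "(p(X) | ~ q(X))"
    = "cnf(cls_foo_0, axiom,\n    (p(X) | ~ q(X))).\n"

which prints as

  cnf(cls_foo_0, axiom,
      (p(X) | ~ q(X))).
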
--- a/src/HOL/Tools/Sledgehammer/sledgehammer_hol_clause.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Tools/Sledgehammer/sledgehammer_hol_clause.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -6,6 +6,7 @@
 
 signature SLEDGEHAMMER_HOL_CLAUSE =
 sig
+  type name = Sledgehammer_FOL_Clause.name
   type kind = Sledgehammer_FOL_Clause.kind
   type fol_type = Sledgehammer_FOL_Clause.fol_type
   type classrel_clause = Sledgehammer_FOL_Clause.classrel_clause
@@ -15,8 +16,8 @@
   type hol_clause_id = int
 
   datatype combterm =
-    CombConst of string * fol_type * fol_type list (* Const and Free *) |
-    CombVar of string * fol_type |
+    CombConst of name * fol_type * fol_type list (* Const and Free *) |
+    CombVar of name * fol_type |
     CombApp of combterm * combterm
   datatype literal = Literal of polarity * combterm
   datatype hol_clause =
@@ -33,35 +34,35 @@
   val get_helper_clauses : bool -> theory -> bool ->
        hol_clause list * (thm * (axiom_name * hol_clause_id)) list * string list ->
        hol_clause list
-  val write_tptp_file : bool -> Path.T ->
+  val write_tptp_file : bool -> bool -> bool -> Path.T ->
     hol_clause list * hol_clause list * hol_clause list * hol_clause list *
-    classrel_clause list * arity_clause list ->
-    int * int
-  val write_dfg_file : bool -> Path.T ->
+    classrel_clause list * arity_clause list -> unit
+  val write_dfg_file : bool -> bool -> Path.T ->
     hol_clause list * hol_clause list * hol_clause list * hol_clause list *
-    classrel_clause list * arity_clause list -> int * int
+    classrel_clause list * arity_clause list -> unit
 end
 
 structure Sledgehammer_HOL_Clause : SLEDGEHAMMER_HOL_CLAUSE =
 struct
 
+open Sledgehammer_Util
 open Sledgehammer_FOL_Clause
 open Sledgehammer_Fact_Preprocessor
 
 (* Parameter "full_types" below indicates that full type information is to be
-exported *)
+   exported.
 
-(* If true, each function will be directly applied to as many arguments as
-   possible, avoiding use of the "apply" operator. Use of hBOOL is also
-   minimized. *)
-val minimize_applies = true;
+   If "explicit_apply" is false, each function will be directly applied to as
+   many arguments as possible, avoiding use of the "apply" operator. Use of
+   hBOOL is also minimized.
+*)
 
 fun min_arity_of const_min_arity c = the_default 0 (Symtab.lookup const_min_arity c);
 
 (*True if the constant ever appears outside of the top-level position in literals.
   If false, the constant always receives all of its arguments and is used as a predicate.*)
-fun needs_hBOOL const_needs_hBOOL c =
-  not minimize_applies orelse
+fun needs_hBOOL explicit_apply const_needs_hBOOL c =
+  explicit_apply orelse
     the_default false (Symtab.lookup const_needs_hBOOL c);
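
The expanded comment documents the new explicit_apply option, which replaces
the old hard-wired minimize_applies flag: with explicit_apply disabled, a
constant is applied directly to as many arguments as its recorded minimum
arity, and only the surplus goes through the "hAPP" application operator
(likewise, hBOOL wrapping is only used where needed).  Roughly, for a curried
application f x y (schematic, modulo the exact spacing produced by
paren_pack):

  explicit_apply = false:   f(x, y)
  explicit_apply = true:    hAPP(hAPP(f, x), y)
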
 
 
@@ -74,8 +75,8 @@
 type hol_clause_id = int;
 
 datatype combterm =
-  CombConst of string * fol_type * fol_type list (* Const and Free *) |
-  CombVar of string * fol_type |
+  CombConst of name * fol_type * fol_type list (* Const and Free *) |
+  CombVar of name * fol_type |
   CombApp of combterm * combterm
 
 datatype literal = Literal of polarity * combterm;
@@ -89,11 +90,11 @@
 (* convert a clause with type Term.term to a clause with type clause *)
 (*********************************************************************)
 
-fun isFalse (Literal(pol, CombConst(c,_,_))) =
+fun isFalse (Literal (pol, CombConst ((c, _), _, _))) =
       (pol andalso c = "c_False") orelse (not pol andalso c = "c_True")
   | isFalse _ = false;
 
-fun isTrue (Literal (pol, CombConst(c,_,_))) =
+fun isTrue (Literal (pol, CombConst ((c, _), _, _))) =
       (pol andalso c = "c_True") orelse
       (not pol andalso c = "c_False")
   | isTrue _ = false;
@@ -101,19 +102,22 @@
 fun isTaut (HOLClause {literals,...}) = exists isTrue literals;
 
 fun type_of dfg (Type (a, Ts)) =
-      let val (folTypes,ts) = types_of dfg Ts
-      in  (Comp(make_fixed_type_const dfg a, folTypes), ts)  end
-  | type_of _ (tp as TFree (a, _)) = (AtomF (make_fixed_type_var a), [tp])
-  | type_of _ (tp as TVar (v, _)) = (AtomV (make_schematic_type_var v), [tp])
+    let val (folTypes,ts) = types_of dfg Ts in
+      (TyConstr (`(make_fixed_type_const dfg) a, folTypes), ts)
+    end
+  | type_of _ (tp as TFree (a, _)) = (TyFree (`make_fixed_type_var a), [tp])
+  | type_of _ (tp as TVar (x, _)) =
+    (TyVar (make_schematic_type_var x, string_of_indexname x), [tp])
 and types_of dfg Ts =
       let val (folTyps,ts) = ListPair.unzip (map (type_of dfg) Ts)
       in  (folTyps, union_all ts)  end;
 
 (* same as above, but no gathering of sort information *)
 fun simp_type_of dfg (Type (a, Ts)) =
-      Comp(make_fixed_type_const dfg a, map (simp_type_of dfg) Ts)
-  | simp_type_of _ (TFree (a, _)) = AtomF (make_fixed_type_var a)
-  | simp_type_of _ (TVar (v, _)) = AtomV (make_schematic_type_var v);
+      TyConstr (`(make_fixed_type_const dfg) a, map (simp_type_of dfg) Ts)
+  | simp_type_of _ (TFree (a, _)) = TyFree (`make_fixed_type_var a)
+  | simp_type_of _ (TVar (x, _)) =
+    TyVar (make_schematic_type_var x, string_of_indexname x);
 
 
 fun const_type_of dfg thy (c,t) =
@@ -123,15 +127,15 @@
 (* convert a Term.term (with combinators) into a combterm, also accummulate sort info *)
 fun combterm_of dfg thy (Const(c,t)) =
       let val (tp,ts,tvar_list) = const_type_of dfg thy (c,t)
-          val c' = CombConst(make_fixed_const dfg c, tp, tvar_list)
+          val c' = CombConst (`(make_fixed_const dfg) c, tp, tvar_list)
       in  (c',ts)  end
   | combterm_of dfg _ (Free(v,t)) =
       let val (tp,ts) = type_of dfg t
-          val v' = CombConst(make_fixed_var v, tp, [])
+          val v' = CombConst (`make_fixed_var v, tp, [])
       in  (v',ts)  end
   | combterm_of dfg _ (Var(v,t)) =
       let val (tp,ts) = type_of dfg t
-          val v' = CombVar(make_schematic_var v,tp)
+          val v' = CombVar ((make_schematic_var v, string_of_indexname v), tp)
       in  (v',ts)  end
   | combterm_of dfg thy (P $ Q) =
       let val (P',tsP) = combterm_of dfg thy P
@@ -169,13 +173,13 @@
     end;
 
 
-fun add_axiom_clause dfg thy ((th,(name,id)), pairs) =
-  let val cls = make_clause dfg thy (id, name, Axiom, th)
-  in
-      if isTaut cls then pairs else (name,cls)::pairs
-  end;
+fun add_axiom_clause dfg thy (th, (name, id)) =
+  let val cls = make_clause dfg thy (id, name, Axiom, th) in
+    not (isTaut cls) ? cons (name, cls)
+  end
 
-fun make_axiom_clauses dfg thy = List.foldl (add_axiom_clause dfg thy) [];
+fun make_axiom_clauses dfg thy clauses =
+  fold (add_axiom_clause dfg thy) clauses []
 
 fun make_conjecture_clauses_aux _ _ _ [] = []
   | make_conjecture_clauses_aux dfg thy n (th::ths) =
@@ -192,8 +196,8 @@
 (**********************************************************************)
 
 (*Result of a function type; no need to check that the argument type matches.*)
-fun result_type (Comp ("tc_fun", [_, tp2])) = tp2
-  | result_type _ = error "result_type"
+fun result_type (TyConstr (("tc_fun", _), [_, tp2])) = tp2
+  | result_type _ = raise Fail "Non-function type"
 
 fun type_of_combterm (CombConst (_, tp, _)) = tp
   | type_of_combterm (CombVar (_, tp)) = tp
@@ -207,11 +211,24 @@
 
 val type_wrapper = "ti";
 
-fun head_needs_hBOOL const_needs_hBOOL (CombConst(c,_,_)) = needs_hBOOL const_needs_hBOOL c
-  | head_needs_hBOOL _ _ = true;
+fun head_needs_hBOOL explicit_apply const_needs_hBOOL
+                     (CombConst ((c, _), _, _)) =
+    needs_hBOOL explicit_apply const_needs_hBOOL c
+  | head_needs_hBOOL _ _ _ = true
 
-fun wrap_type full_types (s, tp) =
-  if full_types then type_wrapper ^ paren_pack [s, string_of_fol_type tp] else s;
+fun wrap_type full_types (s, ty) pool =
+  if full_types then
+    let val (s', pool) = string_of_fol_type ty pool in
+      (type_wrapper ^ paren_pack [s, s'], pool)
+    end
+  else
+    (s, pool)
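+
+(* Illustration of "wrap_type" (the strings are hypothetical): with full_types
+   enabled, wrap_type true ("c_f(X)", ty) pool yields
+   ("ti(c_f(X), <printed ty>)", pool'); otherwise the string is returned
+   unchanged. *)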
+
+fun wrap_type_if full_types explicit_apply cnh (head, s, tp) =
+  if head_needs_hBOOL explicit_apply cnh head then
+    wrap_type full_types (s, tp)
+  else
+    pair s
 
 fun apply ss = "hAPP" ^ paren_pack ss;
 
@@ -220,145 +237,166 @@
 
 fun string_apply (v, args) = rev_apply (v, rev args);
 
-(*Apply an operator to the argument strings, using either the "apply" operator or
-  direct function application.*)
-fun string_of_applic full_types cma (CombConst (c, _, tvars), args) =
-      let val c = if c = "equal" then "c_fequal" else c
-          val nargs = min_arity_of cma c
-          val args1 = List.take(args, nargs)
-            handle Subscript => error ("string_of_applic: " ^ c ^ " has arity " ^
-                                         Int.toString nargs ^ " but is applied to " ^
-                                         space_implode ", " args)
-          val args2 = List.drop(args, nargs)
-          val targs = if full_types then [] else map string_of_fol_type tvars
-      in
-          string_apply (c ^ paren_pack (args1@targs), args2)
-      end
-  | string_of_applic _ _ (CombVar (v, _), args) = string_apply (v, args)
-  | string_of_applic _ _ _ = error "string_of_applic";
+(* Apply an operator to the argument strings, using either the "apply" operator
+   or direct function application. *)
+fun string_of_application full_types cma (CombConst ((s, s'), _, tvars), args)
+                          pool =
+    let
+      val s = if s = "equal" then "c_fequal" else s
+      val nargs = min_arity_of cma s
+      val args1 = List.take (args, nargs)
+        handle Subscript =>
+               raise Fail (quote s ^ " has arity " ^ Int.toString nargs ^
+                           " but is applied to " ^ commas (map quote args))
+      val args2 = List.drop (args, nargs)
+      val (targs, pool) = if full_types then ([], pool)
+                          else pool_map string_of_fol_type tvars pool
+      val (s, pool) = nice_name (s, s') pool
+    in (string_apply (s ^ paren_pack (args1 @ targs), args2), pool) end
+  | string_of_application _ _ (CombVar (name, _), args) pool =
+    nice_name name pool |>> (fn s => string_apply (s, args))
 
-fun wrap_type_if full_types cnh (head, s, tp) =
-  if head_needs_hBOOL cnh head then wrap_type full_types (s, tp) else s;
-
-fun string_of_combterm (params as (full_types, cma, cnh)) t =
-  let val (head, args) = strip_combterm_comb t
-  in  wrap_type_if full_types cnh (head,
-          string_of_applic full_types cma
-                           (head, map (string_of_combterm (params)) args),
-          type_of_combterm t)
-  end;
+fun string_of_combterm (params as (full_types, explicit_apply, cma, cnh)) t
+                       pool =
+  let
+    val (head, args) = strip_combterm_comb t
+    val (ss, pool) = pool_map (string_of_combterm params) args pool
+    val (s, pool) = string_of_application full_types cma (head, ss) pool
+  in
+    wrap_type_if full_types explicit_apply cnh (head, s, type_of_combterm t)
+                 pool
+  end
 
 (*Boolean-valued terms are here converted to literals.*)
-fun boolify params t =
-  "hBOOL" ^ paren_pack [string_of_combterm params t];
+fun boolify params c =
+  string_of_combterm params c #>> prefix "hBOOL" o paren_pack o single
 
-fun string_of_predicate (params as (_,_,cnh)) t =
+fun string_of_predicate (params as (_, explicit_apply, _, cnh)) t =
   case t of
-      (CombApp(CombApp(CombConst("equal",_,_), t1), t2)) =>
-          (*DFG only: new TPTP prefers infix equality*)
-          ("equal" ^ paren_pack [string_of_combterm params t1, string_of_combterm params t2])
-    | _ =>
-          case #1 (strip_combterm_comb t) of
-              CombConst(c,_,_) => if needs_hBOOL cnh c then boolify params t else string_of_combterm params t
-            | _ => boolify params t;
+    (CombApp (CombApp (CombConst (("equal", _), _, _), t1), t2)) =>
+    (* DFG only: new TPTP prefers infix equality *)
+    pool_map (string_of_combterm params) [t1, t2]
+    #>> prefix "equal" o paren_pack
+  | _ =>
+    case #1 (strip_combterm_comb t) of
+      CombConst ((s, _), _, _) =>
+      (if needs_hBOOL explicit_apply cnh s then boolify else string_of_combterm)
+          params t
+    | _ => boolify params t
 
 
-(*** tptp format ***)
+(*** TPTP format ***)
 
-fun tptp_of_equality params pol (t1,t2) =
-  let val eqop = if pol then " = " else " != "
-  in  string_of_combterm params t1 ^ eqop ^ string_of_combterm params t2  end;
-
-fun tptp_literal params (Literal(pol, CombApp(CombApp(CombConst("equal", _, _), t1), t2))) =
-      tptp_of_equality params pol (t1,t2)
-  | tptp_literal params (Literal(pol,pred)) =
-      tptp_sign pol (string_of_predicate params pred);
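+(* Renders an equality literal in infix form: a positive literal as "s = t"
+   and a negative one as "s != t", where "s" and "t" stand for the printed
+   operand terms. *)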
+fun tptp_of_equality params pos (t1, t2) =
+  pool_map (string_of_combterm params) [t1, t2]
+  #>> space_implode (if pos then " = " else " != ")
 
-(*Given a clause, returns its literals paired with a list of literals concerning TFrees;
-  the latter should only occur in conjecture clauses.*)
-fun tptp_type_lits params pos (HOLClause {literals, ctypes_sorts, ...}) =
-      (map (tptp_literal params) literals, 
-       map (tptp_of_typeLit pos) (add_typs ctypes_sorts));
+fun tptp_literal params
+        (Literal (pos, CombApp (CombApp (CombConst (("equal", _), _, _), t1),
+                                         t2))) =
+    tptp_of_equality params pos (t1, t2)
+  | tptp_literal params (Literal (pos, pred)) =
+    string_of_predicate params pred #>> tptp_sign pos
+
+(* Given a clause, returns its literals paired with a list of literals
+   concerning TFrees; the latter should only occur in conjecture clauses. *)
+fun tptp_type_literals params pos (HOLClause {literals, ctypes_sorts, ...}) =
+  pool_map (tptp_literal params) literals
+  #>> rpair (map (tptp_of_typeLit pos) (add_typs ctypes_sorts))
 
-fun clause2tptp params (cls as HOLClause {axiom_name, clause_id, kind, ...}) =
-  let val (lits,tylits) = tptp_type_lits params (kind = Conjecture) cls
+fun tptp_clause params (cls as HOLClause {axiom_name, clause_id, kind, ...})
+                pool =
+  let
+    val ((lits, tylits), pool) =
+      tptp_type_literals params (kind = Conjecture) cls pool
   in
-      (gen_tptp_cls (clause_id, axiom_name, kind, lits, tylits), tylits)
-  end;
+    ((gen_tptp_cls (clause_id, axiom_name, kind, lits, tylits), tylits), pool)
+  end
 
 
-(*** dfg format ***)
+(*** DFG format ***)
 
-fun dfg_literal params (Literal(pol,pred)) = dfg_sign pol (string_of_predicate params pred);
+fun dfg_literal params (Literal (pos, pred)) =
+  string_of_predicate params pred #>> dfg_sign pos
 
-fun dfg_type_lits params pos (HOLClause {literals, ctypes_sorts, ...}) =
-      (map (dfg_literal params) literals, 
-       map (dfg_of_typeLit pos) (add_typs ctypes_sorts));
+fun dfg_type_literals params pos (HOLClause {literals, ctypes_sorts, ...}) =
+  pool_map (dfg_literal params) literals
+  #>> rpair (map (dfg_of_typeLit pos) (add_typs ctypes_sorts))
 
-fun get_uvars (CombConst _) vars = vars
-  | get_uvars (CombVar(v,_)) vars = (v::vars)
-  | get_uvars (CombApp(P,Q)) vars = get_uvars P (get_uvars Q vars);
+fun get_uvars (CombConst _) vars pool = (vars, pool)
+  | get_uvars (CombVar (name, _)) vars pool =
+    nice_name name pool |>> (fn var => var :: vars)
+  | get_uvars (CombApp (P, Q)) vars pool =
+    let val (vars, pool) = get_uvars P vars pool in get_uvars Q vars pool end
 
-fun get_uvars_l (Literal(_,c)) = get_uvars c [];
+fun get_uvars_l (Literal (_, c)) = get_uvars c [];
 
-fun dfg_vars (HOLClause {literals,...}) = union_all (map get_uvars_l literals);
+fun dfg_vars (HOLClause {literals, ...}) =
+  pool_map get_uvars_l literals #>> union_all
 
-fun clause2dfg params (cls as HOLClause {axiom_name, clause_id, kind,
-                                         ctypes_sorts, ...}) =
-  let val (lits,tylits) = dfg_type_lits params (kind = Conjecture) cls
-      val vars = dfg_vars cls
-      val tvars = get_tvar_strs ctypes_sorts
+fun dfg_clause params (cls as HOLClause {axiom_name, clause_id, kind,
+                                         ctypes_sorts, ...}) pool =
+  let
+    val ((lits, tylits), pool) =
+      dfg_type_literals params (kind = Conjecture) cls pool
+    val (vars, pool) = dfg_vars cls pool
+    val tvars = get_tvar_strs ctypes_sorts
   in
-      (gen_dfg_cls (clause_id, axiom_name, kind, lits, tylits, tvars@vars), tylits)
-  end;
+    ((gen_dfg_cls (clause_id, axiom_name, kind, lits, tylits, tvars @ vars),
+      tylits), pool)
+  end
 
 
 (** For DFG format: accumulate function and predicate declarations **)
 
-fun addtypes tvars tab = List.foldl add_foltype_funcs tab tvars;
+fun add_types tvars = fold add_fol_type_funcs tvars
 
-fun add_decls (full_types, cma, cnh) (CombConst (c, _, tvars), (funcs, preds)) =
-      if c = "equal" then (addtypes tvars funcs, preds)
+fun add_decls (full_types, explicit_apply, cma, cnh)
+              (CombConst ((c, _), _, tvars)) (funcs, preds) =
+      if c = "equal" then
+        (add_types tvars funcs, preds)
       else
         let val arity = min_arity_of cma c
             val ntys = if not full_types then length tvars else 0
             val addit = Symtab.update(c, arity+ntys)
         in
-            if needs_hBOOL cnh c then (addtypes tvars (addit funcs), preds)
-            else (addtypes tvars funcs, addit preds)
+            if needs_hBOOL explicit_apply cnh c then
+              (add_types tvars (addit funcs), preds)
+            else
+              (add_types tvars funcs, addit preds)
         end
-  | add_decls _ (CombVar(_,ctp), (funcs,preds)) =
-      (add_foltype_funcs (ctp,funcs), preds)
-  | add_decls params (CombApp(P,Q),decls) = add_decls params (P,add_decls params (Q,decls));
+  | add_decls _ (CombVar (_, ctp)) (funcs, preds) =
+      (add_fol_type_funcs ctp funcs, preds)
+  | add_decls params (CombApp (P, Q)) decls =
+      decls |> add_decls params P |> add_decls params Q
 
-fun add_literal_decls params (Literal (_,c), decls) = add_decls params (c,decls);
+fun add_literal_decls params (Literal (_, c)) = add_decls params c
 
-fun add_clause_decls params (HOLClause {literals, ...}, decls) =
-    List.foldl (add_literal_decls params) decls literals
+fun add_clause_decls params (HOLClause {literals, ...}) decls =
+    fold (add_literal_decls params) literals decls
     handle Symtab.DUP a => error ("function " ^ a ^ " has multiple arities")
 
 fun decls_of_clauses params clauses arity_clauses =
-  let val init_functab = Symtab.update (type_wrapper,2) (Symtab.update ("hAPP",2) init_functab)
-      val init_predtab = Symtab.update ("hBOOL",1) Symtab.empty
-      val (functab,predtab) = (List.foldl (add_clause_decls params) (init_functab, init_predtab) clauses)
-  in
-      (Symtab.dest (List.foldl add_arity_clause_funcs functab arity_clauses),
-       Symtab.dest predtab)
-  end;
+  let val functab =
+        init_functab |> fold Symtab.update [(type_wrapper, 2), ("hAPP", 2),
+                                            ("tc_bool", 0)]
+      val predtab = Symtab.empty |> Symtab.update ("hBOOL", 1)
+      val (functab, predtab) =
+        (functab, predtab) |> fold (add_clause_decls params) clauses
+                           |>> fold add_arity_clause_funcs arity_clauses
+  in (Symtab.dest functab, Symtab.dest predtab) end
 
-fun add_clause_preds (HOLClause {ctypes_sorts, ...}, preds) =
-  List.foldl add_type_sort_preds preds ctypes_sorts
+fun add_clause_preds (HOLClause {ctypes_sorts, ...}) preds =
+  fold add_type_sort_preds ctypes_sorts preds
   handle Symtab.DUP a => error ("predicate " ^ a ^ " has multiple arities")
 
 (*Higher-order clauses have only the predicates hBOOL and type classes.*)
 fun preds_of_clauses clauses clsrel_clauses arity_clauses =
-    Symtab.dest
-        (List.foldl add_classrel_clause_preds
-               (List.foldl add_arity_clause_preds
-                      (List.foldl add_clause_preds Symtab.empty clauses)
-                      arity_clauses)
-               clsrel_clauses)
-
+  Symtab.empty
+  |> fold add_clause_preds clauses
+  |> fold add_arity_clause_preds arity_clauses
+  |> fold add_classrel_clause_preds clsrel_clauses
+  |> Symtab.dest
 
 (**********************************************************************)
 (* write clauses to files                                             *)
@@ -368,22 +406,19 @@
   Symtab.make [("c_COMBI", 0), ("c_COMBK", 0), ("c_COMBB", 0), ("c_COMBC", 0),
                ("c_COMBS", 0)];
 
-fun count_combterm (CombConst (c, _, _), ct) =
-     (case Symtab.lookup ct c of NONE => ct  (*no counter*)
-                               | SOME n => Symtab.update (c,n+1) ct)
-  | count_combterm (CombVar _, ct) = ct
-  | count_combterm (CombApp(t1,t2), ct) = count_combterm(t1, count_combterm(t2, ct));
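+(* Only constants that are already in the table (i.e. the combinators listed
+   in "init_counters") are counted; "Symtab.map_entry" leaves the table
+   unchanged for any other constant. *)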
+fun count_combterm (CombConst ((c, _), _, _)) =
+    Symtab.map_entry c (Integer.add 1)
+  | count_combterm (CombVar _) = I
+  | count_combterm (CombApp (t1, t2)) = count_combterm t1 #> count_combterm t2
 
-fun count_literal (Literal(_,t), ct) = count_combterm(t,ct);
+fun count_literal (Literal (_, t)) = count_combterm t
 
-fun count_clause (HOLClause {literals, ...}, ct) =
-  List.foldl count_literal ct literals;
+fun count_clause (HOLClause {literals, ...}) = fold count_literal literals
 
-fun count_user_clause user_lemmas (HOLClause {axiom_name, literals, ...}, ct) =
-  if axiom_name mem_string user_lemmas then List.foldl count_literal ct literals
-  else ct;
+fun count_user_clause user_lemmas (HOLClause {axiom_name, literals, ...}) =
+  member (op =) user_lemmas axiom_name ? fold count_literal literals
 
-fun cnf_helper_thms thy = cnf_rules_pairs thy o map pairname
+fun cnf_helper_thms thy = cnf_rules_pairs thy o map (`Thm.get_name_hint)
 
 fun get_helper_clauses dfg thy isFO (conjectures, axcls, user_lemmas) =
   if isFO then
@@ -391,8 +426,8 @@
   else
     let
         val axclauses = map #2 (make_axiom_clauses dfg thy axcls)
-        val ct0 = List.foldl count_clause init_counters conjectures
-        val ct = List.foldl (count_user_clause user_lemmas) ct0 axclauses
+        val ct = init_counters |> fold count_clause conjectures
+                               |> fold (count_user_clause user_lemmas) axclauses
         fun needed c = the (Symtab.lookup ct c) > 0
         val IK = if needed "c_COMBI" orelse needed "c_COMBK"
                  then cnf_helper_thms thy [@{thm COMBI_def}, @{thm COMBK_def}]
@@ -413,15 +448,16 @@
 fun count_constants_term toplev t (const_min_arity, const_needs_hBOOL) =
   let val (head, args) = strip_combterm_comb t
       val n = length args
-      val (const_min_arity, const_needs_hBOOL) = fold (count_constants_term false) args (const_min_arity, const_needs_hBOOL)
+      val (const_min_arity, const_needs_hBOOL) =
+        (const_min_arity, const_needs_hBOOL)
+        |> fold (count_constants_term false) args
   in
       case head of
-          CombConst (a,_,_) => (*predicate or function version of "equal"?*)
+          CombConst ((a, _),_,_) => (*predicate or function version of "equal"?*)
             let val a = if a="equal" andalso not toplev then "c_fequal" else a
-            val const_min_arity = Symtab.map_default (a, n) (Integer.min n) const_min_arity
             in
-              if toplev then (const_min_arity, const_needs_hBOOL)
-              else (const_min_arity, Symtab.update (a,true) (const_needs_hBOOL))
+              (const_min_arity |> Symtab.map_default (a, n) (Integer.min n),
+               const_needs_hBOOL |> not toplev ? Symtab.update (a, true))
             end
         | _ => (const_min_arity, const_needs_hBOOL)
   end;
@@ -434,79 +470,100 @@
                            (const_min_arity, const_needs_hBOOL) =
   fold count_constants_lit literals (const_min_arity, const_needs_hBOOL);
 
-fun display_arity const_needs_hBOOL (c,n) =
+fun display_arity explicit_apply const_needs_hBOOL (c,n) =
   trace_msg (fn () => "Constant: " ^ c ^
                 " arity:\t" ^ Int.toString n ^
-                (if needs_hBOOL const_needs_hBOOL c then " needs hBOOL" else ""));
+                (if needs_hBOOL explicit_apply const_needs_hBOOL c then
+                   " needs hBOOL"
+                 else
+                   ""))
 
-fun count_constants (conjectures, _, extra_clauses, helper_clauses, _, _) =
-  if minimize_applies then
+fun count_constants explicit_apply
+                    (conjectures, _, extra_clauses, helper_clauses, _, _) =
+  if not explicit_apply then
      let val (const_min_arity, const_needs_hBOOL) =
           fold count_constants_clause conjectures (Symtab.empty, Symtab.empty)
        |> fold count_constants_clause extra_clauses
        |> fold count_constants_clause helper_clauses
-     val _ = List.app (display_arity const_needs_hBOOL) (Symtab.dest (const_min_arity))
+     val _ = List.app (display_arity explicit_apply const_needs_hBOOL)
+                      (Symtab.dest (const_min_arity))
      in (const_min_arity, const_needs_hBOOL) end
   else (Symtab.empty, Symtab.empty);
 
+fun header () =
+  "% This file was generated by Isabelle (most likely Sledgehammer)\n" ^
+  "% " ^ timestamp () ^ "\n"
+
 (* TPTP format *)
 
-fun write_tptp_file full_types file clauses =
+fun write_tptp_file readable_names full_types explicit_apply file clauses =
   let
+    fun section _ [] = []
+      | section name ss = "\n% " ^ name ^ plural_s (length ss) ^ "\n" :: ss
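+    (* e.g. section "Conjecture" [c1, c2] =
+         ["\n% Conjectures\n", c1, c2], and section _ [] = []. *)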
+    val pool = empty_name_pool readable_names
     val (conjectures, axclauses, _, helper_clauses,
       classrel_clauses, arity_clauses) = clauses
-    val (cma, cnh) = count_constants clauses
-    val params = (full_types, cma, cnh)
-    val (tptp_clss,tfree_litss) = ListPair.unzip (map (clause2tptp params) conjectures)
-    val tfree_clss = map tptp_tfree_clause (List.foldl (uncurry (union (op =))) [] tfree_litss)
-    val _ =
-      File.write_list file (
-        map (#1 o (clause2tptp params)) axclauses @
-        tfree_clss @
-        tptp_clss @
-        map tptp_classrel_clause classrel_clauses @
-        map tptp_arity_clause arity_clauses @
-        map (#1 o (clause2tptp params)) helper_clauses)
-    in (length axclauses + 1, length tfree_clss + length tptp_clss)
-  end;
+    val (cma, cnh) = count_constants explicit_apply clauses
+    val params = (full_types, explicit_apply, cma, cnh)
+    val ((conjecture_clss, tfree_litss), pool) =
+      pool_map (tptp_clause params) conjectures pool |>> ListPair.unzip
+    val tfree_clss = map tptp_tfree_clause (fold (union (op =)) tfree_litss [])
+    val (ax_clss, pool) = pool_map (apfst fst oo tptp_clause params) axclauses
+                                   pool
+    val classrel_clss = map tptp_classrel_clause classrel_clauses
+    val arity_clss = map tptp_arity_clause arity_clauses
+    val (helper_clss, pool) = pool_map (apfst fst oo tptp_clause params)
+                                       helper_clauses pool
+  in
+    File.write_list file
+        (header () ::
+         section "Relevant fact" ax_clss @
+         section "Type variable" tfree_clss @
+         section "Conjecture" conjecture_clss @
+         section "Class relationship" classrel_clss @
+         section "Arity declaration" arity_clss @
+         section "Helper fact" helper_clss)
+  end
 
 
 (* DFG format *)
 
-fun write_dfg_file full_types file clauses =
+fun write_dfg_file full_types explicit_apply file clauses =
   let
+    (* Some of the helper functions below are not name-pool-aware. However,
+       there's no point in adding name pool support if the DFG format is on its
+       way out anyway. *)
+    val pool = NONE
     val (conjectures, axclauses, _, helper_clauses,
       classrel_clauses, arity_clauses) = clauses
-    val (cma, cnh) = count_constants clauses
-    val params = (full_types, cma, cnh)
-    val (dfg_clss, tfree_litss) = ListPair.unzip (map (clause2dfg params) conjectures)
+    val (cma, cnh) = count_constants explicit_apply clauses
+    val params = (full_types, explicit_apply, cma, cnh)
+    val ((conjecture_clss, tfree_litss), pool) =
+      pool_map (dfg_clause params) conjectures pool |>> ListPair.unzip
     and probname = Path.implode (Path.base file)
-    val axstrs = map (#1 o (clause2dfg params)) axclauses
+    val (axstrs, pool) =
+      pool_map (apfst fst oo dfg_clause params) axclauses pool
     val tfree_clss = map dfg_tfree_clause (union_all tfree_litss)
-    val helper_clauses_strs = map (#1 o (clause2dfg params)) helper_clauses
-    val (funcs,cl_preds) = decls_of_clauses params (helper_clauses @ conjectures @ axclauses) arity_clauses
+    val (helper_clauses_strs, pool) =
+      pool_map (apfst fst oo dfg_clause params) helper_clauses pool
+    val (funcs, cl_preds) =
+      decls_of_clauses params (helper_clauses @ conjectures @ axclauses)
+                       arity_clauses
     and ty_preds = preds_of_clauses axclauses classrel_clauses arity_clauses
-    val _ =
-      File.write_list file (
-        string_of_start probname ::
-        string_of_descrip probname ::
-        string_of_symbols (string_of_funcs funcs)
-          (string_of_preds (cl_preds @ ty_preds)) ::
-        "list_of_clauses(axioms,cnf).\n" ::
-        axstrs @
-        map dfg_classrel_clause classrel_clauses @
-        map dfg_arity_clause arity_clauses @
-        helper_clauses_strs @
-        ["end_of_list.\n\nlist_of_clauses(conjectures,cnf).\n"] @
-        tfree_clss @
-        dfg_clss @
-        ["end_of_list.\n\n",
-        (*VarWeight=3 helps the HO problems, probably by counteracting the presence of hAPP*)
-         "list_of_settings(SPASS).\n{*\nset_flag(VarWeight,3).\n*}\nend_of_list.\n\n",
-         "end_problem.\n"])
-
-    in (length axclauses + length classrel_clauses + length arity_clauses +
-      length helper_clauses + 1, length tfree_clss + length dfg_clss)
-  end;
+  in
+    File.write_list file
+        (header () ::
+         string_of_start probname ::
+         string_of_descrip probname ::
+         string_of_symbols (string_of_funcs funcs)
+                           (string_of_preds (cl_preds @ ty_preds)) ::
+         "list_of_clauses(axioms, cnf).\n" ::
+         axstrs @
+         map dfg_classrel_clause classrel_clauses @
+         map dfg_arity_clause arity_clauses @
+         helper_clauses_strs @
+         ["end_of_list.\n\nlist_of_clauses(conjectures, cnf).\n"] @
+         tfree_clss @
+         conjecture_clss @
+         ["end_of_list.\n\n",
+          "end_problem.\n"])
+  end
 
 end;
--- a/src/HOL/Tools/Sledgehammer/sledgehammer_isar.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Tools/Sledgehammer/sledgehammer_isar.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -27,23 +27,25 @@
   {add = [], del = ns, only = false}
 fun only_relevance_override ns : relevance_override =
   {add = ns, del = [], only = true}
-val default_relevance_override = add_to_relevance_override []
+val no_relevance_override = add_to_relevance_override []
 fun merge_relevance_override_pairwise (r1 : relevance_override)
                                       (r2 : relevance_override) =
   {add = #add r1 @ #add r2, del = #del r1 @ #del r2,
-   only = #only r1 orelse #only r2}
+   only = #only r1 andalso #only r2}
 fun merge_relevance_overrides rs =
-  fold merge_relevance_override_pairwise rs default_relevance_override
+  fold merge_relevance_override_pairwise rs (only_relevance_override [])
 
 type raw_param = string * string list
 
 val default_default_params =
   [("debug", "false"),
    ("verbose", "false"),
+   ("overlord", "false"),
+   ("explicit_apply", "false"),
    ("respect_no_atp", "true"),
    ("relevance_threshold", "50"),
    ("convergence", "320"),
-   ("theory_const", "smart"),
+   ("theory_relevant", "smart"),
    ("higher_order", "smart"),
    ("follow_defs", "false"),
    ("isar_proof", "false"),
@@ -51,12 +53,16 @@
    ("sorts", "false"),
    ("minimize_timeout", "5 s")]
 
-val negated_params =
+val alias_params =
+  [("atp", "atps")]
+val negated_alias_params =
   [("no_debug", "debug"),
    ("quiet", "verbose"),
+   ("no_overlord", "overlord"),
+   ("implicit_apply", "explicit_apply"),
    ("ignore_no_atp", "respect_no_atp"),
    ("partial_types", "full_types"),
-   ("no_theory_const", "theory_const"),
+   ("theory_irrelevant", "theory_relevant"),
    ("first_order", "higher_order"),
    ("dont_follow_defs", "follow_defs"),
    ("metis_proof", "isar_proof"),
@@ -66,21 +72,25 @@
 
 fun is_known_raw_param s =
   AList.defined (op =) default_default_params s orelse
-  AList.defined (op =) negated_params s orelse
+  AList.defined (op =) alias_params s orelse
+  AList.defined (op =) negated_alias_params s orelse
   member (op =) property_dependent_params s
 
 fun check_raw_param (s, _) =
   if is_known_raw_param s then ()
   else error ("Unknown parameter: " ^ quote s ^ ".")
 
-fun unnegate_raw_param (name, value) =
-  case AList.lookup (op =) negated_params name of
-    SOME name' => (name', case value of
-                            ["false"] => ["true"]
-                          | ["true"] => ["false"]
-                          | [] => ["false"]
-                          | _ => value)
-  | NONE => (name, value)
+fun unalias_raw_param (name, value) =
+  case AList.lookup (op =) alias_params name of
+    SOME name' => (name', value)
+  | NONE =>
+    case AList.lookup (op =) negated_alias_params name of
+      SOME name' => (name', case value of
+                              ["false"] => ["true"]
+                            | ["true"] => ["false"]
+                            | [] => ["false"]
+                            | _ => value)
+    | NONE => (name, value)
 
 structure Data = Theory_Data(
   type T = raw_param list
@@ -88,19 +98,31 @@
   val extend = I
   fun merge p : T = AList.merge (op =) (K true) p)
 
-val set_default_raw_param = Data.map o AList.update (op =) o unnegate_raw_param
+(* Don't even try to start ATPs that are not installed.
+   Hack: Should not rely on naming convention. *)
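+(* For example (assuming a hypothetical environment where E_HOME is set and
+   SPASS_HOME is empty): filter_atps "e spass remote_vampire" returns
+   "e remote_vampire". *)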
+val filter_atps =
+  space_explode " "
+  #> filter (fn s => String.isPrefix "remote_" s orelse
+                     getenv (String.map Char.toUpper s ^ "_HOME") <> "")
+  #> space_implode " "
+
+val set_default_raw_param = Data.map o AList.update (op =) o unalias_raw_param
 fun default_raw_params thy =
-  [("atps", [!atps]),
-   ("full_types", [if !full_types then "true" else "false"]),
-   ("timeout", [string_of_int (!timeout) ^ " s"])] @
   Data.get thy
+  |> fold (AList.default (op =))
+          [("atps", [filter_atps (!atps)]),
+           ("full_types", [if !full_types then "true" else "false"]),
+           ("timeout", let val timeout = !timeout in
+                         [if timeout <= 0 then "none"
+                          else string_of_int timeout ^ " s"]
+                       end)]
 
 val infinity_time_in_secs = 60 * 60 * 24 * 365
 val the_timeout = the_default (Time.fromSeconds infinity_time_in_secs)
 
 fun extract_params thy default_params override_params =
   let
-    val override_params = map unnegate_raw_param override_params
+    val override_params = map unalias_raw_param override_params
     val raw_params = rev override_params @ rev default_params
     val lookup = Option.map (space_implode " ") o AList.lookup (op =) raw_params
     val lookup_string = the_default "" o lookup
@@ -123,13 +145,15 @@
                                    " must be assigned an integer value.")
     val debug = lookup_bool "debug"
     val verbose = debug orelse lookup_bool "verbose"
+    val overlord = lookup_bool "overlord"
     val atps = lookup_string "atps" |> space_explode " "
     val full_types = lookup_bool "full_types"
+    val explicit_apply = lookup_bool "explicit_apply"
     val respect_no_atp = lookup_bool "respect_no_atp"
     val relevance_threshold =
       0.01 * Real.fromInt (lookup_int "relevance_threshold")
     val convergence = 0.01 * Real.fromInt (lookup_int "convergence")
-    val theory_const = lookup_bool_option "theory_const"
+    val theory_relevant = lookup_bool_option "theory_relevant"
     val higher_order = lookup_bool_option "higher_order"
     val follow_defs = lookup_bool "follow_defs"
     val isar_proof = lookup_bool "isar_proof"
@@ -138,9 +162,10 @@
     val timeout = lookup_time "timeout"
     val minimize_timeout = lookup_time "minimize_timeout"
   in
-    {debug = debug, verbose = verbose, atps = atps, full_types = full_types,
+    {debug = debug, verbose = verbose, overlord = overlord, atps = atps,
+     full_types = full_types, explicit_apply = explicit_apply,
      respect_no_atp = respect_no_atp, relevance_threshold = relevance_threshold,
-     convergence = convergence, theory_const = theory_const,
+     convergence = convergence, theory_relevant = theory_relevant,
      higher_order = higher_order, follow_defs = follow_defs,
      isar_proof = isar_proof, modulus = modulus, sorts = sorts,
      timeout = timeout, minimize_timeout = minimize_timeout}
@@ -149,7 +174,7 @@
 fun get_params thy = extract_params thy (default_raw_params thy)
 fun default_params thy = get_params thy o map (apsnd single)
 
-fun atp_minimize_command override_params old_style_args i fact_refs state =
+fun minimize override_params old_style_args i fact_refs state =
   let
     val thy = Proof.theory_of state
     val ctxt = Proof.context_of state
@@ -162,27 +187,27 @@
     fun get_time_limit_arg s =
       (case Int.fromString s of
         SOME t => Time.fromSeconds t
-      | NONE => error ("Invalid time limit: " ^ quote s))
+      | NONE => error ("Invalid time limit: " ^ quote s ^ "."))
     fun get_opt (name, a) (p, t) =
       (case name of
         "time" => (p, get_time_limit_arg a)
       | "atp" => (a, t)
-      | n => error ("Invalid argument: " ^ n))
+      | n => error ("Invalid argument: " ^ n ^ "."))
     val {atps, minimize_timeout, ...} = get_params thy override_params
     val (atp, timeout) = fold get_opt old_style_args (hd atps, minimize_timeout)
     val params =
       get_params thy
-          [("atps", [atp]),
-           ("minimize_timeout",
-            [string_of_int (Time.toSeconds timeout) ^ " s"])]
+          (override_params @
+           [("atps", [atp]),
+            ("minimize_timeout",
+             [string_of_int (Time.toMilliseconds timeout) ^ " ms"])])
     val prover =
       (case get_prover thy atp of
         SOME prover => prover
-      | NONE => error ("Unknown ATP: " ^ quote atp))
+      | NONE => error ("Unknown ATP: " ^ quote atp ^ "."))
     val name_thms_pairs = theorems_from_refs ctxt fact_refs
   in
-    writeln (#2 (minimize_theorems params linear_minimize prover atp i state
-                                   name_thms_pairs))
+    priority (#2 (minimize_theorems params prover atp i state name_thms_pairs))
   end
 
 val runN = "run"
@@ -194,9 +219,9 @@
 val refresh_tptpN = "refresh_tptp"
 val helpN = "help"
 
-val addN = "add"
-val delN = "del"
-val onlyN = "only"
+
+fun minimizize_raw_param_name "timeout" = "minimize_timeout"
+  | minimizize_raw_param_name name = name
 
 fun hammer_away override_params subcommand opt_i relevance_override state =
   let
@@ -207,8 +232,8 @@
       sledgehammer (get_params thy override_params) (the_default 1 opt_i)
                    relevance_override state
     else if subcommand = minimizeN then
-      atp_minimize_command override_params [] (the_default 1 opt_i)
-                           (#add relevance_override) state
+      minimize (map (apfst minimizize_raw_param_name) override_params) []
+               (the_default 1 opt_i) (#add relevance_override) state
     else if subcommand = messagesN then
       messages opt_i
     else if subcommand = available_atpsN then
@@ -254,14 +279,11 @@
   (Args.add |-- Args.colon |-- parse_fact_refs >> add_to_relevance_override)
   || (Args.del |-- Args.colon |-- parse_fact_refs
       >> del_from_relevance_override)
-  || (P.$$$ onlyN |-- Args.colon |-- parse_fact_refs >> only_relevance_override)
+  || (parse_fact_refs >> only_relevance_override)
 val parse_relevance_override =
-  Scan.optional (Args.parens
-      (Scan.optional (parse_fact_refs >> only_relevance_override)
-                     default_relevance_override
-       -- Scan.repeat parse_relevance_chunk)
-       >> op :: >> merge_relevance_overrides)
-      default_relevance_override
+  Scan.optional (Args.parens (Scan.repeat parse_relevance_chunk
+                              >> merge_relevance_overrides))
+                (add_to_relevance_override [])
 val parse_sledgehammer_command =
   (Scan.optional P.short_ident runN -- parse_params -- parse_relevance_override
    -- Scan.option P.nat) #>> sledgehammer_trans
@@ -293,8 +315,7 @@
     "minimize theorem list with external prover" K.diag
     (parse_minimize_args -- parse_fact_refs >> (fn (args, fact_refs) =>
       Toplevel.no_timing o Toplevel.unknown_proof o
-        Toplevel.keep (atp_minimize_command [] args 1 fact_refs
-                       o Toplevel.proof_of)))
+        Toplevel.keep (minimize [] args 1 fact_refs o Toplevel.proof_of)))
 
 val _ =
   OuterSyntax.improper_command "sledgehammer"
--- a/src/HOL/Tools/Sledgehammer/sledgehammer_proof_reconstruct.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Tools/Sledgehammer/sledgehammer_proof_reconstruct.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -14,10 +14,15 @@
   val strip_prefix: string -> string -> string option
   val is_proof_well_formed: string -> bool
   val metis_line: int -> int -> string list -> string
-  val metis_lemma_list: bool -> string ->
-    string * string vector * (int * int) * Proof.context * thm * int -> string * string list
-  val structured_isar_proof: int -> bool -> string ->
-    string * string vector * (int * int) * Proof.context * thm * int -> string * string list
+  val metis_proof_text:
+    bool -> bool -> string -> string -> string vector -> Proof.context -> thm
+    -> int -> string * string list
+  val isar_proof_text:
+    bool -> int -> bool -> string -> string -> string vector -> Proof.context
+    -> thm -> int -> string * string list
+  val proof_text:
+    bool -> bool -> int -> bool -> string -> string -> string vector
+    -> Proof.context -> thm -> int -> string * string list
 end;
 
 structure Sledgehammer_Proof_Reconstruct : SLEDGEHAMMER_PROOF_RECONSTRUCT =
@@ -493,45 +498,22 @@
   exists (fn s => String.isSubstring s proof) end_proof_strs
 
 (* === EXTRACTING LEMMAS === *)
-(* lines have the form "cnf(108, axiom, ...",
-the number (108) has to be extracted)*)
-fun get_step_nums false extract =
+(* A list consisting of the first number in each line is returned.
+   TPTP: Interesting lines have the form "cnf(108, axiom, ...)", where the
+   number (108) is extracted.
+   DFG: Lines have the form "108[0:Inp] ...", where the first number (108) is
+   extracted. *)
+fun get_step_nums proof_extract =
   let
     val toks = String.tokens (not o Char.isAlphaNum)
     fun inputno ("cnf" :: ntok :: "axiom" :: _) = Int.fromString ntok
       | inputno ("cnf" :: ntok :: "negated" :: "conjecture" :: _) =
         Int.fromString ntok
+      | inputno (ntok::"0"::"Inp"::_) = Int.fromString ntok (* DFG *)
       | inputno _ = NONE
-    val lines = split_lines extract
+    val lines = split_lines proof_extract
   in map_filter (inputno o toks) lines end
-(*String contains multiple lines. We want those of the form
-  "253[0:Inp] et cetera..."
-  A list consisting of the first number in each line is returned. *)
-|  get_step_nums true proofextract =
-  let val toks = String.tokens (not o Char.isAlphaNum)
-  fun inputno (ntok::"0"::"Inp"::_) = Int.fromString ntok
-    | inputno _ = NONE
-  val lines = split_lines proofextract
-  in  map_filter (inputno o toks) lines  end
   
-(*extracting lemmas from tstp-output between the lines from above*)
-fun extract_lemmas get_step_nums (proof, thm_names, conj_count, _, _, _) =
-  let
-    (* get the names of axioms from their numbers*)
-    fun get_axiom_names thm_names step_nums =
-      let
-        val last_axiom = Vector.length thm_names
-        fun is_axiom n = n <= last_axiom
-        fun is_conj n = n >= fst conj_count andalso
-                        n < fst conj_count + snd conj_count
-        fun getname i = Vector.sub(thm_names, i-1)
-      in
-        (sort_distinct string_ord (filter (fn x => x <> "??.unknown")
-          (map getname (filter is_axiom step_nums))),
-        exists is_conj step_nums)
-      end
-  in get_axiom_names thm_names (get_step_nums (get_proof_extract proof)) end;
-
 (*Used to label theorems chained into the sledgehammer call*)
 val chained_hint = "CHAINED";
 val kill_chained = filter_out (curry (op =) chained_hint)
@@ -546,29 +528,33 @@
 fun metis_line i n xs =
   "Try this command: " ^
   Markup.markup Markup.sendback (metis_command i n xs) ^ ".\n" 
-fun minimize_line _ [] = ""
-  | minimize_line name xs =
+fun minimize_line _ _ [] = ""
+  | minimize_line isar_proof atp_name xs =
       "To minimize the number of lemmas, try this command: " ^
       Markup.markup Markup.sendback
-                    ("sledgehammer minimize [atps = " ^ name ^ "] (" ^
+                    ("sledgehammer minimize [atp = " ^ atp_name ^
+                     (if isar_proof then ", isar_proof" else "") ^ "] (" ^
                      space_implode " " xs ^ ")") ^ ".\n"
 
-fun metis_lemma_list dfg name (result as (_, _, _, _, goal, i)) =
+fun metis_proof_text isar_proof minimal atp_name proof thm_names
+                     (_ : Proof.context) goal i =
   let
-    val (lemmas, used_conj) = extract_lemmas (get_step_nums dfg) result
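+    (* Step numbers no larger than the number of axiom names refer to axiom
+       clauses; these are mapped to their names, the "??.unknown" placeholder
+       is dropped, and the result is sorted with duplicates removed. *)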
+    val lemmas =
+      proof |> get_proof_extract
+            |> get_step_nums
+            |> filter (fn i => i <= Vector.length thm_names)
+            |> map (fn i => Vector.sub (thm_names, i - 1))
+            |> filter (fn x => x <> "??.unknown")
+            |> sort_distinct string_ord
     val n = Logic.count_prems (prop_of goal)
     val xs = kill_chained lemmas
   in
-    (metis_line i n xs ^ minimize_line name xs ^
-     (if used_conj then
-        ""
-      else
-        "\nWarning: The goal is provable because the context is inconsistent."),
+    (metis_line i n xs ^
+     (if minimal then "" else minimize_line isar_proof atp_name xs),
      kill_chained lemmas)
-  end;
+  end
 
-fun structured_isar_proof modulus sorts name
-        (result as (proof, thm_names, conj_count, ctxt, goal, i)) =
+fun isar_proof_text minimal modulus sorts atp_name proof thm_names ctxt goal i =
   let
     (* We could use "split_lines", but it can return blank lines. *)
     val lines = String.tokens (equal #"\n");
@@ -576,7 +562,8 @@
       String.translate (fn c => if Char.isSpace c then "" else str c)
     val extract = get_proof_extract proof
     val cnfs = filter (String.isPrefix "cnf(") (map kill_spaces (lines extract))
-    val (one_line_proof, lemma_names) = metis_lemma_list false name result
+    val (one_line_proof, lemma_names) =
+      metis_proof_text true minimal atp_name proof thm_names ctxt goal i
     val tokens = String.tokens (fn c => c = #" ") one_line_proof
     val isar_proof =
       if member (op =) tokens chained_hint then ""
@@ -588,4 +575,8 @@
      lemma_names)
   end
 
+fun proof_text isar_proof minimal modulus sorts =
+  if isar_proof then isar_proof_text minimal modulus sorts
+  else metis_proof_text isar_proof minimal
+
 end;
--- a/src/HOL/Tools/Sledgehammer/sledgehammer_util.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Tools/Sledgehammer/sledgehammer_util.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -6,23 +6,22 @@
 
 signature SLEDGEHAMMER_UTIL =
 sig
+  val plural_s : int -> string
   val serial_commas : string -> string list -> string list
+  val replace_all : string -> string -> string -> string
+  val remove_all : string -> string -> string
+  val timestamp : unit -> string
   val parse_bool_option : bool -> string -> string -> bool option
   val parse_time_option : string -> string -> Time.time option
   val hashw : word * word -> word
   val hashw_char : Char.char * word -> word
   val hashw_string : string * word -> word
 end;
-
+ 
 structure Sledgehammer_Util : SLEDGEHAMMER_UTIL =
 struct
 
-(* This hash function is recommended in Compilers: Principles, Techniques, and
-   Tools, by Aho, Sethi and Ullman. The hashpjw function, which they
-   particularly recommend, triggers a bug in versions of Poly/ML up to 4.2.0. *)
-fun hashw (u, w) = Word.+ (u, Word.* (0w65599, w))
-fun hashw_char (c, w) = hashw (Word.fromInt (Char.ord c), w)
-fun hashw_string (s:string, w) = CharVector.foldl hashw_char w s
+fun plural_s n = if n = 1 then "" else "s"
 
 fun serial_commas _ [] = ["??"]
   | serial_commas _ [s] = [s]
@@ -30,6 +29,20 @@
   | serial_commas conj [s1, s2, s3] = [s1 ^ ",", s2 ^ ",", conj, s3]
   | serial_commas conj (s :: ss) = s ^ "," :: serial_commas conj ss
 
+fun replace_all bef aft =
+  let
+    fun aux seen "" = String.implode (rev seen)
+      | aux seen s =
+        if String.isPrefix bef s then
+          aux seen "" ^ aft ^ aux [] (unprefix bef s)
+        else
+          aux (String.sub (s, 0) :: seen) (String.extract (s, 1, NONE))
+  in aux [] end
+
+fun remove_all bef = replace_all bef ""
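+(* Illustrative examples: replace_all "ab" "-" "abcabd" = "-c-d" and
+   remove_all "ab" "abcabd" = "cd". *)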
+
+val timestamp = Date.fmt "%Y-%m-%d %H:%M:%S" o Date.fromTimeLocal o Time.now
+
 fun parse_bool_option option name s =
   (case s of
      "smart" => if option then NONE else raise Option
@@ -60,4 +73,11 @@
         SOME (Time.fromMilliseconds msecs)
     end
 
+(* This hash function is recommended in Compilers: Principles, Techniques, and
+   Tools, by Aho, Sethi and Ullman. The hashpjw function, which they
+   particularly recommend, triggers a bug in versions of Poly/ML up to 4.2.0. *)
+fun hashw (u, w) = Word.+ (u, Word.* (0w65599, w))
+fun hashw_char (c, w) = hashw (Word.fromInt (Char.ord c), w)
+fun hashw_string (s:string, w) = CharVector.foldl hashw_char w s
+
 end;
--- a/src/HOL/Tools/record.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Tools/record.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -54,9 +54,9 @@
   val print_records: theory -> unit
   val read_typ: Proof.context -> string -> (string * sort) list -> typ * (string * sort) list
   val cert_typ: Proof.context -> typ -> (string * sort) list -> typ * (string * sort) list
-  val add_record: bool -> string list * binding -> (typ list * string) option ->
+  val add_record: bool -> (string * sort) list * binding -> (typ list * string) option ->
     (binding * typ * mixfix) list -> theory -> theory
-  val add_record_cmd: bool -> string list * binding -> string option ->
+  val add_record_cmd: bool -> (string * string option) list * binding -> string option ->
     (binding * string * mixfix) list -> theory -> theory
   val setup: theory -> theory
 end;
@@ -64,7 +64,8 @@
 
 signature ISO_TUPLE_SUPPORT =
 sig
-  val add_iso_tuple_type: bstring * string list -> typ * typ -> theory -> (term * term) * theory
+  val add_iso_tuple_type: bstring * (string * sort) list ->
+    typ * typ -> theory -> (term * term) * theory
   val mk_cons_tuple: term * term -> term
   val dest_cons_tuple: term -> term * term
   val iso_tuple_intros_tac: int -> tactic
@@ -742,7 +743,7 @@
                     val varifyT = varifyT midx;
                     val vartypes = map varifyT types;
 
-                    val subst = fold (Sign.typ_match thy) (vartypes ~~ argtypes) Vartab.empty
+                    val subst = Type.raw_matches (vartypes, argtypes) Vartab.empty
                       handle Type.TYPE_MATCH => err "type is no proper record (extension)";
                     val alphas' =
                       map (Syntax.term_of_typ (! Syntax.show_sorts) o Envir.norm_type subst o varifyT)
@@ -872,11 +873,10 @@
                           apfst (Sign.extern_const thy) f :: map (apfst Long_Name.base_name) fs;
                         val (args', more) = split_last args;
                         val alphavars = map varifyT (but_last alphas);
-                        val subst = fold (Sign.typ_match thy) (alphavars ~~ args') Vartab.empty;
+                        val subst = Type.raw_matches (alphavars, args') Vartab.empty;
                         val fields'' = (map o apsnd) (Envir.norm_type subst o varifyT) fields';
                       in fields'' @ strip_fields more end
-                      handle Type.TYPE_MATCH => [("", T)]
-                        | Library.UnequalLengths => [("", T)])
+                      handle Type.TYPE_MATCH => [("", T)])
                   | _ => [("", T)])
               | _ => [("", T)])
           | _ => [("", T)])
@@ -900,19 +900,18 @@
     val midx = maxidx_of_typ T;
     val varifyT = varifyT midx;
 
-    fun mk_type_abbr subst name alphas =
-      let val abbrT = Type (name, map (fn a => varifyT (TFree (a, Sign.defaultS thy))) alphas) in
-        Syntax.term_of_typ (! Syntax.show_sorts) (Envir.norm_type subst abbrT)
-      end;
-
-    fun match rT T = Sign.typ_match thy (varifyT rT, T) Vartab.empty;
+    fun mk_type_abbr subst name args =
+      let val abbrT = Type (name, map (varifyT o TFree) args)
+      in Syntax.term_of_typ (! Syntax.show_sorts) (Envir.norm_type subst abbrT) end;
+
+    fun match rT T = Type.raw_match (varifyT rT, T) Vartab.empty;
   in
     if ! print_record_type_abbr then
       (case last_extT T of
         SOME (name, _) =>
           if name = last_ext then
             let val subst = match schemeT T in
-              if HOLogic.is_unitT (Envir.norm_type subst (varifyT (TFree (zeta, Sign.defaultS thy))))
+              if HOLogic.is_unitT (Envir.norm_type subst (varifyT (TFree zeta)))
               then mk_type_abbr subst abbr alphas
               else mk_type_abbr subst (suffix schemeN abbr) (alphas @ [zeta])
             end handle Type.TYPE_MATCH => record_type_tr' ctxt tm
@@ -1639,11 +1638,10 @@
     val fields_moreTs = fieldTs @ [moreT];
 
     val alphas_zeta = alphas @ [zeta];
-    val alphas_zetaTs = map (fn a => TFree (a, HOLogic.typeS)) alphas_zeta;
 
     val ext_binding = Binding.name (suffix extN base_name);
     val ext_name = suffix extN name;
-    val extT = Type (suffix ext_typeN name, alphas_zetaTs);
+    val extT = Type (suffix ext_typeN name, map TFree alphas_zeta);
     val ext_type = fields_moreTs ---> extT;
 
 
@@ -1846,10 +1844,8 @@
 
 (* record_definition *)
 
-fun record_definition (args, binding) parent (parents: parent_info list) raw_fields thy =
+fun record_definition (alphas, binding) parent (parents: parent_info list) raw_fields thy =
   let
-    val alphas = map fst args;
-
     val name = Sign.full_name thy binding;
     val full = Sign.full_name_path thy (Binding.name_of binding); (* FIXME Binding.qualified (!?) *)
 
@@ -1869,7 +1865,7 @@
     val fields = map (apfst full) bfields;
     val names = map fst fields;
     val types = map snd fields;
-    val alphas_fields = fold Term.add_tfree_namesT types [];
+    val alphas_fields = fold Term.add_tfreesT types [];
     val alphas_ext = inter (op =) alphas_fields alphas;
     val len = length fields;
     val variants =
@@ -1885,9 +1881,8 @@
     val all_vars = parent_vars @ vars;
     val all_named_vars = (parent_names ~~ parent_vars) @ named_vars;
 
-
-    val zeta = Name.variant alphas "'z";
-    val moreT = TFree (zeta, HOLogic.typeS);
+    val zeta = (Name.variant (map #1 alphas) "'z", HOLogic.typeS);
+    val moreT = TFree zeta;
     val more = Free (moreN, moreT);
     val full_moreN = full (Binding.name moreN);
     val bfields_more = bfields @ [(Binding.name moreN, moreT)];
@@ -1976,11 +1971,6 @@
 
     (* prepare definitions *)
 
-    (*record (scheme) type abbreviation*)
-    val recordT_specs =
-      [(Binding.suffix_name schemeN binding, alphas @ [zeta], rec_schemeT0, NoSyn),
-        (binding, alphas, recT0, NoSyn)];
-
     val ext_defs = ext_def :: map #ext_def parents;
 
     (*Theorems from the iso_tuple intros.
@@ -2064,7 +2054,9 @@
       ext_thy
       |> Sign.add_advanced_trfuns ([], [], advanced_print_translation, [])
       |> Sign.restore_naming thy
-      |> Sign.add_tyabbrs_i recordT_specs
+      |> Typedecl.abbrev_global (binding, map #1 alphas, NoSyn) recT0 |> snd
+      |> Typedecl.abbrev_global
+        (Binding.suffix_name schemeN binding, map #1 (alphas @ [zeta]), NoSyn) rec_schemeT0 |> snd
       |> Sign.qualified_path false binding
       |> fold (fn ((x, T), mx) => snd o Sign.declare_const ((Binding.name x, T), mx))
         (sel_decls ~~ (field_syntax @ [NoSyn]))
@@ -2349,7 +2341,7 @@
            ((Binding.name "iffs", iffs), [iff_add])];
 
     val info =
-      make_record_info args parent fields extension
+      make_record_info alphas parent fields extension
         ext_induct ext_inject ext_surjective ext_split ext_def
         sel_convs' upd_convs' sel_defs' upd_defs' fold_congs' unfold_congs' splits' derived_defs'
         surjective' equality' induct_scheme' induct' cases_scheme' cases' simps' iffs';
@@ -2371,10 +2363,25 @@
 
 (* add_record *)
 
-(*We do all preparations and error checks here, deferring the real
-  work to record_definition.*)
-fun gen_add_record prep_typ prep_raw_parent quiet_mode
-    (params, binding) raw_parent raw_fields thy =
+local
+
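+(* Sketch of the intended behaviour (the record name is hypothetical): for
+   raw_T = "('a, 'b) point", read_parent returns SOME (Ts, "Thy.point") with
+   Ts the two argument types, which are also declared in the result context. *)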
+fun read_parent NONE ctxt = (NONE, ctxt)
+  | read_parent (SOME raw_T) ctxt =
+      (case ProofContext.read_typ_abbrev ctxt raw_T of
+        Type (name, Ts) => (SOME (Ts, name), fold Variable.declare_typ Ts ctxt)
+      | T => error ("Bad parent record specification: " ^ Syntax.string_of_typ ctxt T));
+
+fun prep_field prep (x, T, mx) = (x, prep T, mx)
+  handle ERROR msg =>
+    cat_error msg ("The error(s) above occurred in record field " ^ quote (Binding.str_of x));
+
+fun read_field raw_field ctxt =
+  let val field as (_, T, _) = prep_field (Syntax.read_typ ctxt) raw_field
+  in (field, Variable.declare_typ T ctxt) end;
+
+in
+
+fun add_record quiet_mode (params, binding) raw_parent raw_fields thy =
   let
     val _ = Theory.requires thy "Record" "record definitions";
     val _ =
@@ -2382,40 +2389,19 @@
       else writeln ("Defining record " ^ quote (Binding.str_of binding) ^ " ...");
 
     val ctxt = ProofContext.init thy;
-
-
-    (* parents *)
-
-    fun prep_inst T = fst (cert_typ ctxt T []);
-
-    val parent = Option.map (apfst (map prep_inst) o prep_raw_parent ctxt) raw_parent
-      handle ERROR msg => cat_error msg ("The error(s) above in parent record specification");
+    fun cert_typ T = Type.no_tvars (ProofContext.cert_typ ctxt T)
+      handle TYPE (msg, _, _) => error msg;
+
+
+    (* specification *)
+
+    val parent = Option.map (apfst (map cert_typ)) raw_parent
+      handle ERROR msg =>
+        cat_error msg ("The error(s) above occurred in parent record specification");
+    val parent_args = (case parent of SOME (Ts, _) => Ts | NONE => []);
     val parents = add_parents thy parent [];
 
-    val init_env =
-      (case parent of
-        NONE => []
-      | SOME (types, _) => fold Term.add_tfreesT types []);
-
-
-    (* fields *)
-
-    fun prep_field (x, raw_T, mx) env =
-      let
-        val (T, env') =
-          prep_typ ctxt raw_T env handle ERROR msg =>
-            cat_error msg ("The error(s) above occured in record field " ^ quote (Binding.str_of x));
-      in ((x, T, mx), env') end;
-
-    val (bfields, envir) = fold_map prep_field raw_fields init_env;
-    val envir_names = map fst envir;
-
-
-    (* args *)
-
-    val defaultS = Sign.defaultS thy;
-    val args = map (fn x => (x, AList.lookup (op =) envir x |> the_default defaultS)) params;
-
+    val bfields = map (prep_field cert_typ) raw_fields;
 
     (* errors *)
 
@@ -2424,15 +2410,12 @@
       if is_none (get_record thy name) then []
       else ["Duplicate definition of record " ^ quote name];
 
-    val err_dup_parms =
-      (case duplicates (op =) params of
+    val spec_frees = fold Term.add_tfreesT (parent_args @ map #2 bfields) [];
+    val err_extra_frees =
+      (case subtract (op =) params spec_frees of
         [] => []
-      | dups => ["Duplicate parameter(s) " ^ commas dups]);
-
-    val err_extra_frees =
-      (case subtract (op =) params envir_names of
-        [] => []
-      | extras => ["Extra free type variable(s) " ^ commas extras]);
+      | extras => ["Extra free type variable(s) " ^
+          commas (map (Syntax.string_of_typ ctxt o TFree) extras)]);
 
     val err_no_fields = if null bfields then ["No fields present"] else [];
 
@@ -2445,23 +2428,25 @@
       if forall (not_equal moreN o Binding.name_of o #1) bfields then []
       else ["Illegal field name " ^ quote moreN];
 
-    val err_dup_sorts =
-      (case duplicates (op =) envir_names of
-        [] => []
-      | dups => ["Inconsistent sort constraints for " ^ commas dups]);
-
     val errs =
-      err_dup_record @ err_dup_parms @ err_extra_frees @ err_no_fields @
-      err_dup_fields @ err_bad_fields @ err_dup_sorts;
-
+      err_dup_record @ err_extra_frees @ err_no_fields @ err_dup_fields @ err_bad_fields;
     val _ = if null errs then () else error (cat_lines errs);
   in
-    thy |> record_definition (args, binding) parent parents bfields
+    thy |> record_definition (params, binding) parent parents bfields
   end
   handle ERROR msg => cat_error msg ("Failed to define record " ^ quote (Binding.str_of binding));
 
-val add_record = gen_add_record cert_typ (K I);
-val add_record_cmd = gen_add_record read_typ read_raw_parent;
+fun add_record_cmd quiet_mode (raw_params, binding) raw_parent raw_fields thy =
+  let
+    val ctxt = ProofContext.init thy;
+    val params = map (apsnd (Typedecl.read_constraint ctxt)) raw_params;
+    val ctxt1 = fold (Variable.declare_typ o TFree) params ctxt;
+    val (parent, ctxt2) = read_parent raw_parent ctxt1;
+    val (fields, ctxt3) = fold_map read_field raw_fields ctxt2;
+    val params' = map (ProofContext.check_tfree ctxt3) params;
+  in thy |> add_record quiet_mode (params', binding) parent fields end;
+
+end;
 
 
 (* setup theory *)
@@ -2479,7 +2464,7 @@
 
 val _ =
   OuterSyntax.command "record" "define extensible record" K.thy_decl
-    (P.type_args -- P.binding --
+    (P.type_args_constrained -- P.binding --
       (P.$$$ "=" |-- Scan.option (P.typ --| P.$$$ "+") -- Scan.repeat1 P.const_binding)
     >> (fn (x, (y, z)) => Toplevel.theory (add_record_cmd false x y z)));
 
--- a/src/HOL/Tools/refute.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Tools/refute.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -717,8 +717,10 @@
       | Const (@{const_name Groups.times}, Type ("fun", [Type ("nat", []),
         Type ("fun", [Type ("nat", []), Type ("nat", [])])])) => t
       | Const (@{const_name List.append}, _) => t
+(* UNSOUND
       | Const (@{const_name lfp}, _) => t
       | Const (@{const_name gfp}, _) => t
+*)
       | Const (@{const_name fst}, _) => t
       | Const (@{const_name snd}, _) => t
       (* simply-typed lambda calculus *)
@@ -896,8 +898,10 @@
         Type ("fun", [Type ("nat", []), Type ("nat", [])])])) =>
           collect_type_axioms T axs
       | Const (@{const_name List.append}, T) => collect_type_axioms T axs
+(* UNSOUND
       | Const (@{const_name lfp}, T) => collect_type_axioms T axs
       | Const (@{const_name gfp}, T) => collect_type_axioms T axs
+*)
       | Const (@{const_name fst}, T) => collect_type_axioms T axs
       | Const (@{const_name snd}, T) => collect_type_axioms T axs
       (* simply-typed lambda calculus *)
@@ -2093,7 +2097,7 @@
           val HOLogic_prodT = HOLogic.mk_prodT (T1, T2)
           val HOLogic_setT  = HOLogic.mk_setT HOLogic_prodT
           (* Term.term *)
-          val HOLogic_empty_set = HOLogic.mk_set HOLogic_prodT []
+          val HOLogic_empty_set = Const (@{const_abbrev Set.empty}, HOLogic_setT)
           val HOLogic_insert    =
             Const (@{const_name insert}, HOLogic_prodT --> HOLogic_setT --> HOLogic_setT)
         in
@@ -2969,6 +2973,8 @@
     | _ =>
       NONE;
 
+(* UNSOUND
+
   (* theory -> model -> arguments -> Term.term ->
     (interpretation * model * arguments) option *)
 
@@ -3078,6 +3084,7 @@
       end
     | _ =>
       NONE;
+*)
 
   (* theory -> model -> arguments -> Term.term ->
     (interpretation * model * arguments) option *)
@@ -3163,7 +3170,7 @@
         val HOLogic_prodT = HOLogic.mk_prodT (T1, T2)
         val HOLogic_setT  = HOLogic.mk_setT HOLogic_prodT
         (* Term.term *)
-        val HOLogic_empty_set = HOLogic.mk_set HOLogic_prodT []
+        val HOLogic_empty_set = Const (@{const_abbrev Set.empty}, HOLogic_setT)
         val HOLogic_insert    =
           Const (@{const_name insert}, HOLogic_prodT --> HOLogic_setT --> HOLogic_setT)
       in
@@ -3313,8 +3320,10 @@
      add_interpreter "Nat_HOL.minus"      Nat_minus_interpreter #>
      add_interpreter "Nat_HOL.times"      Nat_times_interpreter #>
      add_interpreter "List.append" List_append_interpreter #>
+(* UNSOUND
      add_interpreter "lfp" lfp_interpreter #>
      add_interpreter "gfp" gfp_interpreter #>
+*)
      add_interpreter "fst" Product_Type_fst_interpreter #>
      add_interpreter "snd" Product_Type_snd_interpreter #>
      add_printer "stlc" stlc_printer #>
--- a/src/HOL/Tools/typecopy.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Tools/typecopy.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -8,7 +8,7 @@
 sig
   type info = { vs: (string * sort) list, constr: string, typ: typ,
     inject: thm, proj: string * typ, proj_def: thm }
-  val typecopy: binding * string list -> typ -> (binding * binding) option
+  val typecopy: binding * (string * sort) list -> typ -> (binding * binding) option
     -> theory -> (string * info) * theory
   val get_info: theory -> string -> info option
   val interpretation: (string -> theory -> theory) -> theory -> theory
@@ -52,8 +52,8 @@
 fun typecopy (raw_tyco, raw_vs) raw_ty constr_proj thy =
   let
     val ty = Sign.certify_typ thy raw_ty;
-    val vs =
-      AList.make (the_default HOLogic.typeS o AList.lookup (op =) (Term.add_tfreesT ty [])) raw_vs;
+    val ctxt = ProofContext.init thy |> Variable.declare_typ ty;
+    val vs = map (ProofContext.check_tfree ctxt) raw_vs;
     val tac = Tactic.rtac UNIV_witness 1;
     fun add_info tyco (({ abs_type = ty_abs, rep_type = ty_rep, Abs_name = c_abs,
           Rep_name = c_rep, ...}, { Abs_inject = inject, Abs_inverse = inverse, ... })
@@ -80,8 +80,7 @@
         end
   in
     thy
-    |> Typedef.add_typedef_global false (SOME raw_tyco)
-      (raw_tyco, map (fn (v, _) => (v, dummyS)) vs, NoSyn)   (* FIXME keep constraints!? *)
+    |> Typedef.add_typedef_global false (SOME raw_tyco) (raw_tyco, vs, NoSyn)
       (HOLogic.mk_UNIV ty) (Option.map swap constr_proj) tac
     |-> (fn (tyco, info) => add_info tyco info)
   end;
--- a/src/HOL/Tools/typedef.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Tools/typedef.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -8,7 +8,7 @@
 signature TYPEDEF =
 sig
   type info =
-   {rep_type: typ, abs_type: typ, Rep_name: string, Abs_name: string} *
+   {rep_type: typ, abs_type: typ, Rep_name: string, Abs_name: string, axiom_name: string} *
    {inhabited: thm, type_definition: thm, set_def: thm option, Rep: thm, Rep_inverse: thm,
     Abs_inverse: thm, Rep_inject: thm, Abs_inject: thm, Rep_cases: thm, Abs_cases: thm,
     Rep_induct: thm, Abs_induct: thm}
@@ -36,7 +36,7 @@
 
 type info =
   (*global part*)
-  {rep_type: typ, abs_type: typ, Rep_name: string, Abs_name: string} *
+  {rep_type: typ, abs_type: typ, Rep_name: string, Abs_name: string, axiom_name: string} *
   (*local part*)
   {inhabited: thm, type_definition: thm, set_def: thm option, Rep: thm, Rep_inverse: thm,
     Abs_inverse: thm, Rep_inject: thm, Abs_inject: thm, Rep_cases: thm, Abs_cases: thm,
@@ -117,12 +117,12 @@
 
     val typedef_deps = Term.add_consts A [];
 
-    val (axiom, axiom_thy) = consts_thy
+    val ((axiom_name, axiom), axiom_thy) = consts_thy
       |> Thm.add_axiom (typedef_name, mk_typedef newT oldT RepC AbsC A)
       ||> Theory.add_deps "" (dest_Const RepC) typedef_deps
       ||> Theory.add_deps "" (dest_Const AbsC) typedef_deps;
 
-  in ((RepC, AbsC, axiom), axiom_thy) end;
+  in ((RepC, AbsC, axiom_name, axiom), axiom_thy) end;
 
 
 (* prepare_typedef *)
@@ -135,9 +135,9 @@
 
     (* rhs *)
 
-    val (_, tmp_lthy) = lthy |> Typedecl.predeclare_constraints (tname, raw_args, mx);
-    val set = prep_term tmp_lthy raw_set;
-    val tmp_lthy' = tmp_lthy |> Variable.declare_constraints set;
+    val tmp_ctxt = lthy |> fold (Variable.declare_typ o TFree) raw_args;
+    val set = prep_term tmp_ctxt raw_set;
+    val tmp_ctxt' = tmp_ctxt |> Variable.declare_term set;
 
     val setT = Term.fastype_of set;
     val oldT = HOLogic.dest_setT setT handle TYPE _ =>
@@ -149,7 +149,7 @@
 
     (* lhs *)
 
-    val args = map (fn (a, _) => (a, ProofContext.default_sort tmp_lthy' (a, ~1))) raw_args;
+    val args = map (ProofContext.check_tfree tmp_ctxt') raw_args;
     val (newT, typedecl_lthy) = lthy
       |> Typedecl.typedecl (tname, args, mx)
       ||> Variable.declare_term set;
@@ -177,20 +177,20 @@
 
     val typedef_name = Binding.prefix_name "type_definition_" name;
 
-    val ((RepC, AbsC, typedef), typedef_lthy) =
+    val ((RepC, AbsC, axiom_name, typedef), typedef_lthy) =
       let
         val thy = ProofContext.theory_of set_lthy;
         val cert = Thm.cterm_of thy;
         val (defs, A) =
           Local_Defs.export_cterm set_lthy (ProofContext.init thy) (cert set') ||> Thm.term_of;
 
-        val ((RepC, AbsC, axiom), axiom_lthy) = set_lthy |>
+        val ((RepC, AbsC, axiom_name, axiom), axiom_lthy) = set_lthy |>
           Local_Theory.theory_result (primitive_typedef typedef_name newT oldT Rep_name Abs_name A);
 
         val cert = Thm.cterm_of (ProofContext.theory_of axiom_lthy);
         val typedef =
           Local_Defs.contract axiom_lthy defs (cert (mk_typedef newT oldT RepC AbsC set')) axiom;
-      in ((RepC, AbsC, typedef), axiom_lthy) end;
+      in ((RepC, AbsC, axiom_name, typedef), axiom_lthy) end;
 
     val alias_lthy = typedef_lthy
       |> Local_Theory.const_alias Rep_name (#1 (Term.dest_Const RepC))
@@ -236,8 +236,8 @@
               make @{thm type_definition.Abs_induct});
 
         val info =
-          ({rep_type = oldT, abs_type = newT,
-            Rep_name = #1 (Term.dest_Const RepC), Abs_name = #1 (Term.dest_Const AbsC)},
+          ({rep_type = oldT, abs_type = newT, Rep_name = #1 (Term.dest_Const RepC),
+            Abs_name = #1 (Term.dest_Const AbsC), axiom_name = axiom_name},
            {inhabited = inhabited, type_definition = type_definition, set_def = set_def,
             Rep = Rep, Rep_inverse = Rep_inverse, Abs_inverse = Abs_inverse,
             Rep_inject = Rep_inject, Abs_inject = Abs_inject, Rep_cases = Rep_cases,
--- a/src/HOL/Typerep.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Typerep.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -90,6 +90,6 @@
 
 code_reserved Eval Term
 
-hide (open) const typerep Typerep
+hide_const (open) typerep Typerep
 
 end
--- a/src/HOL/Word/BinGeneral.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/Word/BinGeneral.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -28,7 +28,7 @@
 
 lemmas BIT_simps = BIT_B0_eq_Bit0 BIT_B1_eq_Bit1
 
-hide (open) const B0 B1
+hide_const (open) B0 B1
 
 lemma Min_ne_Pls [iff]:  
   "Int.Min ~= Int.Pls"
--- a/src/HOL/ex/Codegenerator_Candidates.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/ex/Codegenerator_Candidates.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -20,9 +20,8 @@
   "~~/src/HOL/Number_Theory/Primes"
   Product_ord
   "~~/src/HOL/ex/Records"
+  RBT
   SetsAndFunctions
-  Table
-  Tree
   While_Combinator
   Word
 begin
--- a/src/HOL/ex/Execute_Choice.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/ex/Execute_Choice.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -58,15 +58,15 @@
 
 text {*
   Given @{text valuesum_rec} as initial description, we stepwise refine it to something executable;
-  first, we formally insert the constructor @{term AList} and split the one equation into two,
+  first, we formally insert the constructor @{term Mapping} and split the one equation into two,
   where the second one provides the necessary context:
 *}
 
-lemma valuesum_rec_AList:
-  shows [code]: "valuesum (AList []) = 0"
-  and "valuesum (AList (x # xs)) = (let l = (SOME l. l \<in> Mapping.keys (AList (x # xs))) in
-    the (Mapping.lookup (AList (x # xs)) l) + valuesum (Mapping.delete l (AList (x # xs))))"
-  by (simp_all add: valuesum_rec finite_dom_map_of is_empty_AList)
+lemma valuesum_rec_Mapping:
+  shows [code]: "valuesum (Mapping []) = 0"
+  and "valuesum (Mapping (x # xs)) = (let l = (SOME l. l \<in> Mapping.keys (Mapping (x # xs))) in
+    the (Mapping.lookup (Mapping (x # xs)) l) + valuesum (Mapping.delete l (Mapping (x # xs))))"
+  by (simp_all add: valuesum_rec finite_dom_map_of is_empty_Mapping)
 
 text {*
   As a side effect the precondition disappears (but note this has nothing to do with choice!).
@@ -76,27 +76,27 @@
 *}
 
 lemma valuesum_rec_exec [code]:
-  "valuesum (AList (x # xs)) = (let l = fst (hd (x # xs)) in
-    the (Mapping.lookup (AList (x # xs)) l) + valuesum (Mapping.delete l (AList (x # xs))))"
+  "valuesum (Mapping (x # xs)) = (let l = fst (hd (x # xs)) in
+    the (Mapping.lookup (Mapping (x # xs)) l) + valuesum (Mapping.delete l (Mapping (x # xs))))"
 proof -
-  let ?M = "AList (x # xs)"
+  let ?M = "Mapping (x # xs)"
   let ?l1 = "(SOME l. l \<in> Mapping.keys ?M)"
   let ?l2 = "fst (hd (x # xs))"
-  have "finite (Mapping.keys ?M)" by (simp add: keys_AList)
+  have "finite (Mapping.keys ?M)" by (simp add: keys_Mapping)
   moreover have "?l1 \<in> Mapping.keys ?M"
-    by (rule someI) (auto simp add: keys_AList)
+    by (rule someI) (auto simp add: keys_Mapping)
   moreover have "?l2 \<in> Mapping.keys ?M"
-    by (simp add: keys_AList)
+    by (simp add: keys_Mapping)
   ultimately have "the (Mapping.lookup ?M ?l1) + valuesum (Mapping.delete ?l1 ?M) =
     the (Mapping.lookup ?M ?l2) + valuesum (Mapping.delete ?l2 ?M)"
     by (rule valuesum_choice)
-  then show ?thesis by (simp add: valuesum_rec_AList)
+  then show ?thesis by (simp add: valuesum_rec_Mapping)
 qed
   
 text {*
   See how it works:
 *}
 
-value "valuesum (AList [(''abc'', (42::int)), (''def'', 1705)])"
+value "valuesum (Mapping [(''abc'', (42::int)), (''def'', 1705)])"
 
 end
--- a/src/HOL/ex/Meson_Test.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/ex/Meson_Test.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -16,7 +16,7 @@
   below and constants declared in HOL!
 *}
 
-hide (open) const subset member quotient union inter
+hide_const (open) subset member quotient union inter
 
 text {*
   Test data for the MESON proof procedure
--- a/src/HOL/ex/Numeral.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/ex/Numeral.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -879,7 +879,7 @@
 declare zero_is_num_zero [code_unfold del]
 declare one_is_num_one [code_unfold del]
 
-hide (open) const sub dup
+hide_const (open) sub dup
 
 
 subsection {* Numeral equations as default simplification rules *}
--- a/src/HOL/ex/RPred.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/ex/RPred.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -47,6 +47,6 @@
 definition map :: "('a \<Rightarrow> 'b) \<Rightarrow> ('a rpred \<Rightarrow> 'b rpred)"
   where "map f P = bind P (return o f)"
 
-hide (open) const return bind supp map
+hide_const (open) return bind supp map
   
 end
\ No newline at end of file
--- a/src/HOL/ex/Refute_Examples.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/ex/Refute_Examples.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -1361,7 +1361,8 @@
 | "x : a_odd \<Longrightarrow> f x : a_even"
 
 lemma "x : a_odd"
-  refute  -- {* finds a model of size 2, as expected *}
+  (* refute  -- {* finds a model of size 2, as expected *}
+     NO LONGER WORKS since "lfp"'s interpreter is disabled *)
 oops
 
 (*****************************************************************************)
--- a/src/HOL/ex/SVC_Oracle.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOL/ex/SVC_Oracle.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -17,7 +17,7 @@
   iff_keep :: "[bool, bool] => bool"
   iff_unfold :: "[bool, bool] => bool"
 
-hide const iff_keep iff_unfold
+hide_const iff_keep iff_unfold
 
 oracle svc_oracle = Svc.oracle
 
--- a/src/HOLCF/Domain.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOLCF/Domain.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -149,8 +149,8 @@
   cfcomp2 sfst_defined_iff ssnd_defined_iff
 
 lemmas take_con_rules =
-  ID1 ssum_map_sinl' ssum_map_sinr' ssum_map_strict
-  sprod_map_spair' sprod_map_strict u_map_up u_map_strict
+  ssum_map_sinl' ssum_map_sinr' sprod_map_spair' u_map_up
+  deflation_strict deflation_ID ID1 cfcomp2
 
 use "Tools/cont_consts.ML"
 use "Tools/cont_proc.ML"
--- a/src/HOLCF/Fixrec.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOLCF/Fixrec.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -608,7 +608,7 @@
       (@{const_name UU}, @{const_name match_UU}) ]
 *}
 
-hide (open) const return bind fail run cases
+hide_const (open) return bind fail run cases
 
 lemmas [fixrec_simp] =
   run_strict run_fail run_return
--- a/src/HOLCF/IOA/meta_theory/Automata.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOLCF/IOA/meta_theory/Automata.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -59,7 +59,7 @@
 
   (* hiding and restricting *)
   hide_asig     :: "['a signature, 'a set] => 'a signature"
-  "hide"        :: "[('a,'s)ioa, 'a set] => ('a,'s)ioa"
+  hide          :: "[('a,'s)ioa, 'a set] => ('a,'s)ioa"
   restrict_asig :: "['a signature, 'a set] => 'a signature"
   restrict      :: "[('a,'s)ioa, 'a set] => ('a,'s)ioa"
 
--- a/src/HOLCF/Tools/Domain/domain_axioms.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOLCF/Tools/Domain/domain_axioms.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -15,7 +15,7 @@
       binding * term -> theory -> thm * theory
 
   val add_axioms :
-      (binding * (typ * typ)) list -> theory ->
+      (binding * mixfix * (typ * typ)) list -> theory ->
       (Domain_Take_Proofs.iso_info list
        * Domain_Take_Proofs.take_induct_info) * theory
 end;
@@ -85,37 +85,51 @@
 
     val (lub_take_thm, thy) = Specification.axiom ((lub_take_bind, []), lub_take_eqn) thy;
   in
-    (Drule.export_without_context lub_take_thm, thy)
+    (lub_take_thm, thy)
   end;
 
 fun add_axioms
-    (dom_eqns : (binding * (typ * typ)) list)
+    (dom_eqns : (binding * mixfix * (typ * typ)) list)
     (thy : theory) =
   let
 
+    val dbinds = map #1 dom_eqns;
+
+    (* declare new types *)
+    fun thy_type (dbind, mx, (lhsT, _)) =
+        (dbind, (length o snd o dest_Type) lhsT, mx);
+    val thy = Sign.add_types (map thy_type dom_eqns) thy;
+
+    (* axiomatize type constructor arities *)
+    fun thy_arity (_, _, (lhsT, _)) =
+        let val (dname, tvars) = dest_Type lhsT;
+        in (dname, map (snd o dest_TFree) tvars, @{sort pcpo}) end;
+    val thy = fold (AxClass.axiomatize_arity o thy_arity) dom_eqns thy;
+
     (* declare and axiomatize abs/rep *)
     val (iso_infos, thy) =
-        fold_map axiomatize_isomorphism dom_eqns thy;
+        fold_map axiomatize_isomorphism
+          (map (fn (dbind, _, eqn) => (dbind, eqn)) dom_eqns) thy;
 
     (* define take functions *)
     val (take_info, thy) =
         Domain_Take_Proofs.define_take_functions
-          (map fst dom_eqns ~~ iso_infos) thy;
+          (dbinds ~~ iso_infos) thy;
 
     (* declare lub_take axioms *)
     val (lub_take_thms, thy) =
         fold_map axiomatize_lub_take
-          (map fst dom_eqns ~~ #take_consts take_info) thy;
+          (dbinds ~~ #take_consts take_info) thy;
 
     (* prove additional take theorems *)
     val (take_info2, thy) =
         Domain_Take_Proofs.add_lub_take_theorems
-          (map fst dom_eqns ~~ iso_infos) take_info lub_take_thms thy;
+          (dbinds ~~ iso_infos) take_info lub_take_thms thy;
 
     (* define map functions *)
     val (map_info, thy) =
         Domain_Isomorphism.define_map_functions
-          (map fst dom_eqns ~~ iso_infos) thy;
+          (dbinds ~~ iso_infos) thy;
 
   in
     ((iso_infos, take_info2), thy)
--- a/src/HOLCF/Tools/Domain/domain_constructors.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOLCF/Tools/Domain/domain_constructors.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -200,7 +200,7 @@
           val conj = foldr1 mk_conj (eqn :: map mk_defined nonlazy);
         in Library.foldr mk_ex (vs, conj) end;
       val goal = mk_trp (foldr1 mk_disj (mk_undef y :: map one_con spec'));
-      (* first 3 rules replace "y = UU \/ P" with "rep$y = UU \/ P" *)
+      (* first rules replace "y = UU \/ P" with "rep$y = UU \/ P" *)
       val tacs = [
           rtac (iso_locale RS @{thm iso.casedist_rule}) 1,
           rewrite_goals_tac [mk_meta_eq (iso_locale RS @{thm iso.iso_swap})],
@@ -210,7 +210,7 @@
       val exhaust =
           (nchotomy RS @{thm exh_casedist0})
           |> rewrite_rule @{thms exh_casedists}
-          |> Drule.export_without_context;
+          |> Drule.zero_var_indexes;
     end;
 
     (* prove compactness rules for constructors *)
--- a/src/HOLCF/Tools/Domain/domain_extender.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOLCF/Tools/Domain/domain_extender.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -39,7 +39,6 @@
 
 (* ----- general testing and preprocessing of constructor list -------------- *)
 fun check_and_sort_domain
-    (definitional : bool)
     (dtnvs : (string * typ list) list)
     (cons'' : (binding * (bool * binding option * typ) list * mixfix) list list)
     (thy : theory)
@@ -80,9 +79,6 @@
           | rm_sorts (Type(s,ts)) = Type(s,remove_sorts ts)
           | rm_sorts (TVar(s,_))  = TVar(s,[])
         and remove_sorts l = map rm_sorts l;
-        val indirect_ok =
-            [@{type_name "*"}, @{type_name cfun}, @{type_name ssum},
-             @{type_name sprod}, @{type_name u}];
         fun analyse indirect (TFree(v,s))  =
             (case AList.lookup (op =) tvars v of 
                NONE => error ("Free type variable " ^ quote v ^ " on rhs.")
@@ -93,10 +89,7 @@
                                         " for type variable " ^ quote v))
           | analyse indirect (t as Type(s,typl)) =
             (case AList.lookup (op =) dtnvs s of
-               NONE =>
-                 if definitional orelse s mem indirect_ok
-                 then Type(s,map (analyse false) typl)
-                 else Type(s,map (analyse true) typl)
+               NONE => Type (s, map (analyse false) typl)
              | SOME typevars =>
                  if indirect 
                  then error ("Indirect recursion of type " ^ 
@@ -125,84 +118,12 @@
 
 (* ----- calls for building new thy and thms -------------------------------- *)
 
+type info =
+     Domain_Take_Proofs.iso_info list * Domain_Take_Proofs.take_induct_info;
+
 fun gen_add_domain
     (prep_typ : theory -> 'a -> typ)
-    (comp_dbind : binding)
-    (eqs''' : ((string * string option) list * binding * mixfix *
-               (binding * (bool * binding option * 'a) list * mixfix) list) list)
-    (thy : theory) =
-  let
-    val dtnvs : (binding * typ list * mixfix) list =
-      let
-        fun readS (SOME s) = Syntax.read_sort_global thy s
-          | readS NONE = Sign.defaultS thy;
-        fun readTFree (a, s) = TFree (a, readS s);
-      in
-        map (fn (vs,dname:binding,mx,_) =>
-                (dname, map readTFree vs, mx)) eqs'''
-      end;
-
-    (* declare new types *)
-    val thy =
-      let
-        fun thy_type  (dname,tvars,mx) = (dname, length tvars, mx);
-        fun thy_arity (dname,tvars,mx) =
-            (Sign.full_name thy dname, map (snd o dest_TFree) tvars, pcpoS);
-      in
-        thy
-          |> Sign.add_types (map thy_type dtnvs)
-          |> fold (AxClass.axiomatize_arity o thy_arity) dtnvs
-      end;
-
-    val dbinds : binding list =
-        map (fn (_,dbind,_,_) => dbind) eqs''';
-    val cons''' :
-        (binding * (bool * binding option * 'a) list * mixfix) list list =
-        map (fn (_,_,_,cons) => cons) eqs''';
-    val cons'' :
-        (binding * (bool * binding option * typ) list * mixfix) list list =
-        map (map (upd_second (map (upd_third (prep_typ thy))))) cons''';
-    val dtnvs' : (string * typ list) list =
-      map (fn (dname,vs,mx) => (Sign.full_name thy dname,vs)) dtnvs;
-    val eqs' : ((string * typ list) *
-        (binding * (bool * binding option * typ) list * mixfix) list) list =
-        check_and_sort_domain false dtnvs' cons'' thy;
-    val dts : typ list = map (Type o fst) eqs';
-    val new_dts : (string * string list) list =
-        map (fn ((s,Ts),_) => (s, map (fst o dest_TFree) Ts)) eqs';
-    fun one_con (con,args,mx) : cons =
-        (Binding.name_of con,  (* FIXME preverse binding (!?) *)
-         ListPair.map (fn ((lazy,sel,tp),vn) =>
-           mk_arg ((lazy, Datatype_Aux.dtyp_of_typ new_dts tp), vn))
-                      (args, Datatype_Prop.make_tnames (map third args)));
-    val eqs : eq list =
-        map (fn (dtnvs,cons') => (dtnvs, map one_con cons')) eqs';
-
-    fun mk_arg_typ (lazy, dest_opt, T) = if lazy then mk_uT T else T;
-    fun mk_con_typ (bind, args, mx) =
-        if null args then oneT else foldr1 mk_sprodT (map mk_arg_typ args);
-    fun mk_eq_typ (_, cons) = foldr1 mk_ssumT (map mk_con_typ cons);
-    val repTs : typ list = map mk_eq_typ eqs';
-    val dom_eqns : (binding * (typ * typ)) list = dbinds ~~ (dts ~~ repTs);
-    val ((iso_infos, take_info), thy) =
-        Domain_Axioms.add_axioms dom_eqns thy;
-
-    val ((rewss, take_rews), theorems_thy) =
-        thy
-          |> fold_map (fn (((dbind, eq), (_,cs)), info) =>
-                Domain_Theorems.theorems (eq, eqs) dbind cs info take_info)
-             (dbinds ~~ eqs ~~ eqs' ~~ iso_infos)
-          ||>> Domain_Theorems.comp_theorems (comp_dbind, eqs) take_info;
-  in
-    theorems_thy
-      |> PureThy.add_thmss
-           [((Binding.qualified true "rews" comp_dbind,
-              flat rewss @ take_rews), [])]
-      |> snd
-  end;
-
-fun gen_add_new_domain
-    (prep_typ : theory -> 'a -> typ)
+    (add_isos : (binding * mixfix * (typ * typ)) list -> theory -> info * theory)
     (comp_dbind : binding)
     (eqs''' : ((string * string option) list * binding * mixfix *
                (binding * (bool * binding option * 'a) list * mixfix) list) list)
@@ -240,34 +161,35 @@
         map (fn (dname,vs,mx) => (Sign.full_name thy dname,vs)) dtnvs;
     val eqs' : ((string * typ list) *
         (binding * (bool * binding option * typ) list * mixfix) list) list =
-        check_and_sort_domain true dtnvs' cons'' tmp_thy;
+        check_and_sort_domain dtnvs' cons'' tmp_thy;
+    val dts : typ list = map (Type o fst) eqs';
 
     fun mk_arg_typ (lazy, dest_opt, T) = if lazy then mk_uT T else T;
     fun mk_con_typ (bind, args, mx) =
         if null args then oneT else foldr1 mk_sprodT (map mk_arg_typ args);
     fun mk_eq_typ (_, cons) = foldr1 mk_ssumT (map mk_con_typ cons);
-    
-    val ((iso_infos, take_info), thy) = thy |>
-      Domain_Isomorphism.domain_isomorphism
-        (map (fn ((vs, dname, mx, _), eq) =>
-                 (map fst vs, dname, mx, mk_eq_typ eq, NONE))
-             (eqs''' ~~ eqs'))
+    val repTs : typ list = map mk_eq_typ eqs';
 
-    val dts : typ list = map (Type o fst) eqs';
+    val iso_spec : (binding * mixfix * (typ * typ)) list =
+        map (fn ((dbind, _, mx), eq) => (dbind, mx, eq))
+          (dtnvs ~~ (dts ~~ repTs));
+
+    val ((iso_infos, take_info), thy) = add_isos iso_spec thy;
+
     val new_dts : (string * string list) list =
         map (fn ((s,Ts),_) => (s, map (fst o dest_TFree) Ts)) eqs';
     fun one_con (con,args,mx) : cons =
-        (Binding.name_of con,   (* FIXME preverse binding (!?) *)
+        (Binding.name_of con,  (* FIXME preverse binding (!?) *)
          ListPair.map (fn ((lazy,sel,tp),vn) =>
            mk_arg ((lazy, Datatype_Aux.dtyp_of_typ new_dts tp), vn))
-                      (args, Datatype_Prop.make_tnames (map third args))
-        );
+                      (args, Datatype_Prop.make_tnames (map third args)));
     val eqs : eq list =
         map (fn (dtnvs,cons') => (dtnvs, map one_con cons')) eqs';
+
     val ((rewss, take_rews), theorems_thy) =
         thy
-          |> fold_map (fn (((dbind, eq), (x,cs)), info) =>
-               Domain_Theorems.theorems (eq, eqs) dbind cs info take_info)
+          |> fold_map (fn (((dbind, eq), (_,cs)), info) =>
+                Domain_Theorems.theorems (eq, eqs) dbind cs info take_info)
              (dbinds ~~ eqs ~~ eqs' ~~ iso_infos)
           ||>> Domain_Theorems.comp_theorems (comp_dbind, eqs) take_info;
   in
@@ -278,11 +200,26 @@
       |> snd
   end;
 
-val add_domain = gen_add_domain Sign.certify_typ;
-val add_domain_cmd = gen_add_domain Syntax.read_typ_global;
+fun define_isos (spec : (binding * mixfix * (typ * typ)) list) =
+  let
+    fun prep (dbind, mx, (lhsT, rhsT)) =
+      let val (dname, vs) = dest_Type lhsT;
+      in (map (fst o dest_TFree) vs, dbind, mx, rhsT, NONE) end;
+  in
+    Domain_Isomorphism.domain_isomorphism (map prep spec)
+  end;
 
-val add_new_domain = gen_add_new_domain Sign.certify_typ;
-val add_new_domain_cmd = gen_add_new_domain Syntax.read_typ_global;
+val add_domain =
+    gen_add_domain Sign.certify_typ Domain_Axioms.add_axioms;
+
+val add_new_domain =
+    gen_add_domain Sign.certify_typ define_isos;
+
+val add_domain_cmd =
+    gen_add_domain Syntax.read_typ_global Domain_Axioms.add_axioms;
+
+val add_new_domain_cmd =
+    gen_add_domain Syntax.read_typ_global define_isos;
 
 
 (** outer syntax **)
--- a/src/HOLCF/Tools/Domain/domain_isomorphism.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOLCF/Tools/Domain/domain_isomorphism.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -136,7 +136,7 @@
     val rep_iso = #rep_inverse info;
     val thm = @{thm deflation_abs_rep} OF [abs_iso, rep_iso];
   in
-    Drule.export_without_context thm
+    Drule.zero_var_indexes thm
   end
 
 (******************************************************************************)
@@ -201,7 +201,7 @@
 
     (* register unfold theorems *)
     val (unfold_thms, thy) =
-      (PureThy.add_thms o map (Thm.no_attributes o apsnd Drule.export_without_context))
+      (PureThy.add_thms o map (Thm.no_attributes o apsnd Drule.zero_var_indexes))
         (mk_unfold_thms unfold_binds tuple_unfold_thm) thy;
   in
     ((proj_thms, unfold_thms), thy)
@@ -349,7 +349,7 @@
     val deflation_map_binds = dbinds |>
         map (Binding.prefix_name "deflation_" o Binding.suffix_name "_map");
     val (deflation_map_thms, thy) = thy |>
-      (PureThy.add_thms o map (Thm.no_attributes o apsnd Drule.export_without_context))
+      (PureThy.add_thms o map (Thm.no_attributes o apsnd Drule.zero_var_indexes))
         (conjuncts deflation_map_binds deflation_map_thm);
 
     (* register map functions in theory data *)
@@ -519,7 +519,7 @@
     fun mk_iso_thms ((tbind, REP_eq), (rep_def, abs_def)) thy =
       let
         fun make thm =
-            Drule.export_without_context (thm OF [REP_eq, abs_def, rep_def]);
+            Drule.zero_var_indexes (thm OF [REP_eq, abs_def, rep_def]);
         val rep_iso_thm = make @{thm domain_rep_iso};
         val abs_iso_thm = make @{thm domain_abs_iso};
         val isodefl_thm = make @{thm isodefl_abs_rep};
@@ -607,7 +607,7 @@
           val thmR = thm RS @{thm conjunct2};
         in (n, thmL):: conjuncts ns thmR end;
     val (isodefl_thms, thy) = thy |>
-      (PureThy.add_thms o map (Thm.no_attributes o apsnd Drule.export_without_context))
+      (PureThy.add_thms o map (Thm.no_attributes o apsnd Drule.zero_var_indexes))
         (conjuncts isodefl_binds isodefl_thm);
     val thy = IsodeflData.map (fold Thm.add_thm isodefl_thms) thy;
 
--- a/src/HOLCF/Tools/Domain/domain_take_proofs.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOLCF/Tools/Domain/domain_take_proofs.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -167,7 +167,7 @@
     val rep_iso = #rep_inverse info;
     val thm = @{thm deflation_abs_rep} OF [abs_iso, rep_iso];
   in
-    Drule.export_without_context thm
+    Drule.zero_var_indexes thm
   end
 
 (******************************************************************************)
@@ -359,15 +359,15 @@
         in (n, thmL):: conjuncts ns thmR end;
     val (deflation_take_thms, thy) =
       fold_map (add_qualified_thm "deflation_take")
-        (map (apsnd Drule.export_without_context)
+        (map (apsnd Drule.zero_var_indexes)
           (conjuncts dbinds deflation_take_thm)) thy;
 
     (* prove strictness of take functions *)
     fun prove_take_strict (deflation_take, dbind) thy =
       let
         val take_strict_thm =
-            Drule.export_without_context
-            (@{thm deflation_strict} OF [deflation_take]);
+            Drule.zero_var_indexes
+              (@{thm deflation_strict} OF [deflation_take]);
       in
         add_qualified_thm "take_strict" (dbind, take_strict_thm) thy
       end;
@@ -379,8 +379,8 @@
     fun prove_take_take ((chain_take, deflation_take), dbind) thy =
       let
         val take_take_thm =
-            Drule.export_without_context
-            (@{thm deflation_chain_min} OF [chain_take, deflation_take]);
+            Drule.zero_var_indexes
+              (@{thm deflation_chain_min} OF [chain_take, deflation_take]);
       in
         add_qualified_thm "take_take" (dbind, take_take_thm) thy
       end;
@@ -392,8 +392,8 @@
     fun prove_take_below (deflation_take, dbind) thy =
       let
         val take_below_thm =
-            Drule.export_without_context
-            (@{thm deflation.below} OF [deflation_take]);
+            Drule.zero_var_indexes
+              (@{thm deflation.below} OF [deflation_take]);
       in
         add_qualified_thm "take_below" (dbind, take_below_thm) thy
       end;
@@ -542,7 +542,7 @@
     fun prove_reach_lemma ((chain_take, lub_take), dbind) thy =
       let
         val thm =
-            Drule.export_without_context
+            Drule.zero_var_indexes
               (@{thm lub_ID_reach} OF [chain_take, lub_take]);
       in
         add_qualified_thm "reach" (dbind, thm) thy
@@ -575,7 +575,7 @@
       else
         let
           fun prove_take_induct (chain_take, lub_take) =
-              Drule.export_without_context
+              Drule.zero_var_indexes
                 (@{thm lub_ID_take_induct} OF [chain_take, lub_take]);
           val take_inducts =
               map prove_take_induct (chain_take_thms ~~ lub_take_thms);
--- a/src/HOLCF/Tools/Domain/domain_theorems.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOLCF/Tools/Domain/domain_theorems.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -184,8 +184,7 @@
       val rhs = con_app2 con one_rhs args;
       val goal = mk_trp (lhs === rhs);
       val rules =
-          [ax_abs_iso]
-          @ @{thms take_con_rules ID1 cfcomp2 deflation_strict}
+          [ax_abs_iso] @ @{thms take_con_rules}
           @ take_Suc_thms @ deflation_thms @ deflation_take_thms;
       val tacs = [simp_tac (HOL_basic_ss addsimps rules) 1];
     in pg con_appls goal (K tacs) end;
--- a/src/HOLCF/Tools/pcpodef.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOLCF/Tools/pcpodef.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -91,7 +91,7 @@
     val thy = AxClass.prove_arity (full_tname, lhs_sorts, @{sort cpo}) tac thy;
     (* transfer thms so that they will know about the new cpo instance *)
     val cpo_thms' = map (Thm.transfer thy) cpo_thms;
-    fun make thm = Drule.export_without_context (thm OF cpo_thms');
+    fun make thm = Drule.zero_var_indexes (thm OF cpo_thms');
     val cont_Rep = make @{thm typedef_cont_Rep};
     val cont_Abs = make @{thm typedef_cont_Abs};
     val lub = make @{thm typedef_lub};
@@ -133,7 +133,7 @@
     val tac = Tactic.rtac (@{thm typedef_pcpo} OF pcpo_thms) 1;
     val thy = AxClass.prove_arity (full_tname, lhs_sorts, @{sort pcpo}) tac thy;
     val pcpo_thms' = map (Thm.transfer thy) pcpo_thms;
-    fun make thm = Drule.export_without_context (thm OF pcpo_thms');
+    fun make thm = Drule.zero_var_indexes (thm OF pcpo_thms');
     val Rep_strict = make @{thm typedef_Rep_strict};
     val Abs_strict = make @{thm typedef_Abs_strict};
     val Rep_strict_iff = make @{thm typedef_Rep_strict_iff};
@@ -169,18 +169,18 @@
     val _ = Theory.requires thy "Pcpodef" "pcpodefs";
 
     (*rhs*)
-    val (_, tmp_lthy) =
-      thy |> Theory.copy |> Theory_Target.init NONE
-      |> Typedecl.predeclare_constraints (tname, raw_args, mx);
-    val set = prep_term tmp_lthy raw_set;
-    val tmp_lthy' = tmp_lthy |> Variable.declare_constraints set;
+    val tmp_ctxt =
+      ProofContext.init thy
+      |> fold (Variable.declare_typ o TFree) raw_args;
+    val set = prep_term tmp_ctxt raw_set;
+    val tmp_ctxt' = tmp_ctxt |> Variable.declare_term set;
 
     val setT = Term.fastype_of set;
     val oldT = HOLogic.dest_setT setT handle TYPE _ =>
-      error ("Not a set type: " ^ quote (Syntax.string_of_typ tmp_lthy setT));
+      error ("Not a set type: " ^ quote (Syntax.string_of_typ tmp_ctxt setT));
 
     (*lhs*)
-    val lhs_tfrees = map (fn (a, _) => (a, ProofContext.default_sort tmp_lthy' (a, ~1))) raw_args;
+    val lhs_tfrees = map (ProofContext.check_tfree tmp_ctxt') raw_args;
     val full_tname = Sign.full_name thy tname;
     val newT = Type (full_tname, map TFree lhs_tfrees);
 
--- a/src/HOLCF/Tools/repdef.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOLCF/Tools/repdef.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -64,18 +64,18 @@
     val _ = Theory.requires thy "Representable" "repdefs";
 
     (*rhs*)
-    val (_, tmp_lthy) =
-      thy |> Theory.copy |> Theory_Target.init NONE
-      |> Typedecl.predeclare_constraints (tname, raw_args, mx);
-    val defl = prep_term tmp_lthy raw_defl;
-    val tmp_lthy = tmp_lthy |> Variable.declare_constraints defl;
+    val tmp_ctxt =
+      ProofContext.init thy
+      |> fold (Variable.declare_typ o TFree) raw_args;
+    val defl = prep_term tmp_ctxt raw_defl;
+    val tmp_ctxt = tmp_ctxt |> Variable.declare_constraints defl;
 
     val deflT = Term.fastype_of defl;
     val _ = if deflT = @{typ "udom alg_defl"} then ()
-            else error ("Not type udom alg_defl: " ^ quote (Syntax.string_of_typ tmp_lthy deflT));
+            else error ("Not type udom alg_defl: " ^ quote (Syntax.string_of_typ tmp_ctxt deflT));
 
     (*lhs*)
-    val lhs_tfrees = map (fn (a, _) => (a, ProofContext.default_sort tmp_lthy (a, ~1))) raw_args;
+    val lhs_tfrees = map (ProofContext.check_tfree tmp_ctxt) raw_args;
     val lhs_sorts = map snd lhs_tfrees;
     val full_tname = Sign.full_name thy tname;
     val newT = Type (full_tname, map TFree lhs_tfrees);
@@ -141,7 +141,7 @@
       |> Sign.add_path (Binding.name_of name)
       |> PureThy.add_thm
          ((Binding.prefix_name "REP_" name,
-          Drule.export_without_context (@{thm typedef_REP} OF typedef_thms')), [])
+          Drule.zero_var_indexes (@{thm typedef_REP} OF typedef_thms')), [])
       ||> Sign.restore_naming thy;
 
     val rep_info =
--- a/src/HOLCF/Universal.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOLCF/Universal.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -809,7 +809,7 @@
  apply (rule ubasis_until_less)
 done
 
-hide (open) const
+hide_const (open)
   node
   choose
   choose_pos
--- a/src/HOLCF/ex/Domain_Proofs.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOLCF/ex/Domain_Proofs.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -55,20 +55,23 @@
 definition baz_defl :: "TypeRep \<rightarrow> TypeRep"
 where "baz_defl = (\<Lambda> a. snd (snd (fix\<cdot>(foo_bar_baz_deflF\<cdot>a))))"
 
+lemma defl_apply_thms:
+  "foo_defl\<cdot>a = fst (fix\<cdot>(foo_bar_baz_deflF\<cdot>a))"
+  "bar_defl\<cdot>a = fst (snd (fix\<cdot>(foo_bar_baz_deflF\<cdot>a)))"
+  "baz_defl\<cdot>a = snd (snd (fix\<cdot>(foo_bar_baz_deflF\<cdot>a)))"
+unfolding foo_defl_def bar_defl_def baz_defl_def by simp_all
+
 text {* Unfold rules for each combinator. *}
 
 lemma foo_defl_unfold:
   "foo_defl\<cdot>a = ssum_defl\<cdot>REP(one)\<cdot>(sprod_defl\<cdot>(u_defl\<cdot>a)\<cdot>(u_defl\<cdot>(bar_defl\<cdot>a)))"
-unfolding foo_defl_def bar_defl_def baz_defl_def
-by (subst fix_eq, simp add: foo_bar_baz_deflF_beta)
+unfolding defl_apply_thms by (subst fix_eq, simp add: foo_bar_baz_deflF_beta)
 
 lemma bar_defl_unfold: "bar_defl\<cdot>a = u_defl\<cdot>(cfun_defl\<cdot>(baz_defl\<cdot>a)\<cdot>REP(tr))"
-unfolding foo_defl_def bar_defl_def baz_defl_def
-by (subst fix_eq, simp add: foo_bar_baz_deflF_beta)
+unfolding defl_apply_thms by (subst fix_eq, simp add: foo_bar_baz_deflF_beta)
 
 lemma baz_defl_unfold: "baz_defl\<cdot>a = u_defl\<cdot>(cfun_defl\<cdot>(convex_defl\<cdot>(foo_defl\<cdot>a))\<cdot>REP(tr))"
-unfolding foo_defl_def bar_defl_def baz_defl_def
-by (subst fix_eq, simp add: foo_bar_baz_deflF_beta)
+unfolding defl_apply_thms by (subst fix_eq, simp add: foo_bar_baz_deflF_beta)
 
 text "The automation for the previous steps will be quite similar to
 how the fixrec package works."
@@ -308,6 +311,12 @@
 definition baz_map :: "('a \<rightarrow> 'b) \<rightarrow> ('b baz \<rightarrow> 'a baz)"
 where "baz_map = (\<Lambda> f. snd (snd (fix\<cdot>(foo_bar_baz_mapF\<cdot>f))))"
 
+lemma map_apply_thms:
+  "foo_map\<cdot>f = fst (fix\<cdot>(foo_bar_baz_mapF\<cdot>f))"
+  "bar_map\<cdot>f = fst (snd (fix\<cdot>(foo_bar_baz_mapF\<cdot>f)))"
+  "baz_map\<cdot>f = snd (snd (fix\<cdot>(foo_bar_baz_mapF\<cdot>f)))"
+unfolding foo_map_def bar_map_def baz_map_def by simp_all
+
 text {* Prove isodefl rules for all map functions simultaneously. *}
 
 lemma isodefl_foo_bar_baz:
@@ -316,8 +325,7 @@
   "isodefl (foo_map\<cdot>d) (foo_defl\<cdot>t) \<and>
   isodefl (bar_map\<cdot>d) (bar_defl\<cdot>t) \<and>
   isodefl (baz_map\<cdot>d) (baz_defl\<cdot>t)"
- apply (simp add: foo_map_def bar_map_def baz_map_def)
- apply (simp add: foo_defl_def bar_defl_def baz_defl_def)
+unfolding map_apply_thms defl_apply_thms
  apply (rule parallel_fix_ind)
    apply (intro adm_conj adm_isodefl cont2cont_fst cont2cont_snd cont_id)
   apply (simp only: fst_strict snd_strict isodefl_bottom simp_thms)
@@ -366,65 +374,100 @@
 
 (********************************************************************)
 
-subsection {* Step 5: Define copy functions, prove reach lemmas *}
-
-text {* Define copy functions just like the old domain package does. *}
-
-definition
-  foo_copy ::
-    "('a foo \<rightarrow> 'a foo) \<times> ('a bar \<rightarrow> 'a bar) \<times> ('a baz \<rightarrow> 'a baz) \<rightarrow>
-       'a foo \<rightarrow> 'a foo"
-where
-  "foo_copy = (\<Lambda> p. foo_abs oo
-        ssum_map\<cdot>ID\<cdot>(sprod_map\<cdot>(u_map\<cdot>ID)\<cdot>(u_map\<cdot>(fst (snd p))))
-          oo foo_rep)"
+subsection {* Step 5: Define take functions, prove lub-take lemmas *}
 
 definition
-  bar_copy ::
-    "('a foo \<rightarrow> 'a foo) \<times> ('a bar \<rightarrow> 'a bar) \<times> ('a baz \<rightarrow> 'a baz) \<rightarrow>
-       'a bar \<rightarrow> 'a bar"
-where
-  "bar_copy = (\<Lambda> p. bar_abs oo
-        u_map\<cdot>(cfun_map\<cdot>(snd (snd p))\<cdot>ID) oo bar_rep)"
-
-definition
-  baz_copy ::
-    "('a foo \<rightarrow> 'a foo) \<times> ('a bar \<rightarrow> 'a bar) \<times> ('a baz \<rightarrow> 'a baz) \<rightarrow>
-       'a baz \<rightarrow> 'a baz"
-where
-  "baz_copy = (\<Lambda> p. baz_abs oo
-        u_map\<cdot>(cfun_map\<cdot>(convex_map\<cdot>(fst p))\<cdot>ID) oo baz_rep)"
-
-definition
-  foo_bar_baz_copy ::
+  foo_bar_baz_takeF ::
     "('a foo \<rightarrow> 'a foo) \<times> ('a bar \<rightarrow> 'a bar) \<times> ('a baz \<rightarrow> 'a baz) \<rightarrow>
      ('a foo \<rightarrow> 'a foo) \<times> ('a bar \<rightarrow> 'a bar) \<times> ('a baz \<rightarrow> 'a baz)"
 where
-  "foo_bar_baz_copy = (\<Lambda> f. (foo_copy\<cdot>f, bar_copy\<cdot>f, baz_copy\<cdot>f))"
+  "foo_bar_baz_takeF = (\<Lambda> p.
+    ( foo_abs oo
+        ssum_map\<cdot>ID\<cdot>(sprod_map\<cdot>(u_map\<cdot>ID)\<cdot>(u_map\<cdot>(fst (snd p))))
+          oo foo_rep
+    , bar_abs oo
+        u_map\<cdot>(cfun_map\<cdot>(snd (snd p))\<cdot>ID) oo bar_rep
+    , baz_abs oo
+        u_map\<cdot>(cfun_map\<cdot>(convex_map\<cdot>(fst p))\<cdot>ID) oo baz_rep
+    ))"
+
+lemma foo_bar_baz_takeF_beta:
+  "foo_bar_baz_takeF\<cdot>p =
+    ( foo_abs oo
+        ssum_map\<cdot>ID\<cdot>(sprod_map\<cdot>(u_map\<cdot>ID)\<cdot>(u_map\<cdot>(fst (snd p))))
+          oo foo_rep
+    , bar_abs oo
+        u_map\<cdot>(cfun_map\<cdot>(snd (snd p))\<cdot>ID) oo bar_rep
+    , baz_abs oo
+        u_map\<cdot>(cfun_map\<cdot>(convex_map\<cdot>(fst p))\<cdot>ID) oo baz_rep
+    )"
+unfolding foo_bar_baz_takeF_def by (rule beta_cfun, simp)
+
+definition
+  foo_take :: "nat \<Rightarrow> 'a foo \<rightarrow> 'a foo"
+where
+  "foo_take = (\<lambda>n. fst (iterate n\<cdot>foo_bar_baz_takeF\<cdot>\<bottom>))"
+
+definition
+  bar_take :: "nat \<Rightarrow> 'a bar \<rightarrow> 'a bar"
+where
+  "bar_take = (\<lambda>n. fst (snd (iterate n\<cdot>foo_bar_baz_takeF\<cdot>\<bottom>)))"
 
-lemma fix_foo_bar_baz_copy:
-  "fix\<cdot>foo_bar_baz_copy = (foo_map\<cdot>ID, bar_map\<cdot>ID, baz_map\<cdot>ID)"
-unfolding foo_map_def bar_map_def baz_map_def
-apply (subst beta_cfun, simp)+
-apply (subst pair_collapse)+
-apply (rule cfun_arg_cong)
-unfolding foo_bar_baz_mapF_def split_def
-unfolding foo_bar_baz_copy_def
-unfolding foo_copy_def bar_copy_def baz_copy_def
-apply (subst beta_cfun, simp)+
-apply (rule refl)
+definition
+  baz_take :: "nat \<Rightarrow> 'a baz \<rightarrow> 'a baz"
+where
+  "baz_take = (\<lambda>n. snd (snd (iterate n\<cdot>foo_bar_baz_takeF\<cdot>\<bottom>)))"
+
+lemma chain_take_thms: "chain foo_take" "chain bar_take" "chain baz_take"
+unfolding foo_take_def bar_take_def baz_take_def
+by (intro ch2ch_fst ch2ch_snd chain_iterate)+
+
+lemma take_0_thms: "foo_take 0 = \<bottom>" "bar_take 0 = \<bottom>" "baz_take 0 = \<bottom>"
+unfolding foo_take_def bar_take_def baz_take_def
+by (simp only: iterate_0 fst_strict snd_strict)+
+
+lemma take_Suc_thms:
+  "foo_take (Suc n) =
+    foo_abs oo ssum_map\<cdot>ID\<cdot>(sprod_map\<cdot>(u_map\<cdot>ID)\<cdot>(u_map\<cdot>(bar_take n))) oo foo_rep"
+  "bar_take (Suc n) =
+    bar_abs oo u_map\<cdot>(cfun_map\<cdot>(baz_take n)\<cdot>ID) oo bar_rep"
+  "baz_take (Suc n) =
+    baz_abs oo u_map\<cdot>(cfun_map\<cdot>(convex_map\<cdot>(foo_take n))\<cdot>ID) oo baz_rep"
+unfolding foo_take_def bar_take_def baz_take_def
+by (simp only: iterate_Suc foo_bar_baz_takeF_beta fst_conv snd_conv)+
+
+lemma lub_take_lemma:
+  "(\<Squnion>n. foo_take n, \<Squnion>n. bar_take n, \<Squnion>n. baz_take n)
+    = (foo_map\<cdot>(ID::'a \<rightarrow> 'a), bar_map\<cdot>(ID::'a \<rightarrow> 'a), baz_map\<cdot>(ID::'a \<rightarrow> 'a))"
+apply (simp only: thelub_Pair [symmetric] ch2ch_Pair chain_take_thms)
+apply (simp only: map_apply_thms pair_collapse)
+apply (simp only: fix_def2)
+apply (rule lub_eq)
+apply (rule nat.induct)
+apply (simp only: iterate_0 Pair_strict take_0_thms)
+apply (simp only: iterate_Suc Pair_fst_snd_eq fst_conv snd_conv
+                  foo_bar_baz_mapF_beta take_Suc_thms simp_thms)
 done
 
-lemma foo_reach: "fst (fix\<cdot>foo_bar_baz_copy)\<cdot>x = x"
-unfolding fix_foo_bar_baz_copy fst_conv snd_conv
-unfolding foo_map_ID by (rule ID1)
+lemma lub_foo_take: "(\<Squnion>n. foo_take n) = ID"
+apply (rule trans [OF _ foo_map_ID])
+using lub_take_lemma
+apply (elim Pair_inject)
+apply assumption
+done
 
-lemma bar_reach: "fst (snd (fix\<cdot>foo_bar_baz_copy))\<cdot>x = x"
-unfolding fix_foo_bar_baz_copy fst_conv snd_conv
-unfolding bar_map_ID by (rule ID1)
+lemma lub_bar_take: "(\<Squnion>n. bar_take n) = ID"
+apply (rule trans [OF _ bar_map_ID])
+using lub_take_lemma
+apply (elim Pair_inject)
+apply assumption
+done
 
-lemma baz_reach: "snd (snd (fix\<cdot>foo_bar_baz_copy))\<cdot>x = x"
-unfolding fix_foo_bar_baz_copy fst_conv snd_conv
-unfolding baz_map_ID by (rule ID1)
+lemma lub_baz_take: "(\<Squnion>n. baz_take n) = ID"
+apply (rule trans [OF _ baz_map_ID])
+using lub_take_lemma
+apply (elim Pair_inject)
+apply assumption
+done
 
 end
--- a/src/HOLCF/ex/Domain_ex.thy	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/HOLCF/ex/Domain_ex.thy	Wed Apr 21 11:23:04 2010 +0200
@@ -57,18 +57,16 @@
 *}
 
 domain 'a d7 = d7a "'a d7 \<oplus> int lift" | d7b "'a \<otimes> 'a d7" | d7c (lazy "'a d7 \<rightarrow> 'a")
-
-(* d7.ind is absent *)
+  -- "Indirect recursion detected, skipping proofs of (co)induction rules"
+(* d7.induct is absent *)
 
 text {*
-  Indirect recursion using previously-defined datatypes is currently
-  not allowed.  This restriction does not apply to the new definitional
-  domain package.
+  Indirect recursion is also allowed using previously-defined datatypes.
 *}
-(*
+
 domain 'a slist = SNil | SCons 'a "'a slist"
-domain 'a stree = STip | SBranch "'a stree slist" -- "illegal indirect recursion"
-*)
+
+domain 'a stree = STip | SBranch "'a stree slist"
 
 text {* Mutually-recursive datatypes can be defined using the @{text "and"} keyword. *}
 
@@ -104,6 +102,14 @@
 
 lemma "(x::triv) = \<bottom>" by (induct x, simp_all)
 
+text {* Lazy constructor arguments may have unpointed types. *}
+
+domain natlist = nnil | ncons (lazy "nat discr") natlist
+
+text {* Class constraints may be given for type parameters on the LHS. *}
+
+domain ('a::cpo) box = Box (lazy 'a)
+
 
 subsection {* Generated constants and theorems *}
 
@@ -192,24 +198,13 @@
 (*
 domain xx = xx ("x y")
   -- "Inner syntax error: unexpected end of input"
-
-domain 'a foo = foo (sel::"'a") ("a b")
-  -- {* Inner syntax error at "= UU" *}
-*)
-
-text {* Declaring class constraints on the LHS is currently broken. *}
-(*
-domain ('a::cpo) box = Box (lazy 'a)
-  -- "Proof failed"
 *)
 
 text {*
-  Class constraints on the RHS are not supported yet.  This feature is
-  planned to replace the old-style LHS class constraints.
+  Non-cpo type parameters currently do not work.
 *}
 (*
-domain 'a box = Box (lazy "'a::cpo")
-  -- {* Inconsistent sort constraint for type variable "'a" *}
+domain ('a::type) stream = snil | scons (lazy "'a discr") "'a stream"
 *)
 
 end
--- a/src/Pure/General/path.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/General/path.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -1,7 +1,8 @@
 (*  Title:      Pure/General/path.ML
     Author:     Markus Wenzel, TU Muenchen
 
-Abstract algebra of file paths (external encoding in Unix style).
+Abstract algebra of file paths: basic POSIX notation, extended by
+named roots (e.g. //foo) and variables (e.g. $BAR).
 *)
 
 signature PATH =
@@ -10,6 +11,7 @@
   val is_current: T -> bool
   val current: T
   val root: T
+  val named_root: string -> T
   val parent: T
   val basic: string -> T
   val variable: string -> T
@@ -31,10 +33,15 @@
 structure Path: PATH =
 struct
 
-
 (* path elements *)
 
-datatype elem = Root | Parent | Basic of string | Variable of string;
+datatype elem =
+  Root of string |
+  Basic of string |
+  Variable of string |
+  Parent;
+
+local
 
 fun err_elem msg chs = error (msg ^ " path element specification: " ^ quote (implode chs));
 
@@ -46,16 +53,18 @@
         [] => chs
       | bads => err_elem ("Illegal character(s) " ^ commas_quote bads ^ " in") chs);
 
+in
+
+val root_elem = Root o implode o check_elem;
 val basic_elem = Basic o implode o check_elem;
 val variable_elem = Variable o implode o check_elem;
 
-fun is_var (Variable _) = true
-  | is_var _ = false;
+end;
 
 
 (* type path *)
 
-datatype T = Path of elem list;
+datatype T = Path of elem list;    (*reversed elements*)
 
 fun rep (Path xs) = xs;
 
@@ -63,13 +72,16 @@
   | is_current _ = false;
 
 val current = Path [];
-val root = Path [Root];
-val parent = Path [Parent];
+val root = Path [Root ""];
+fun named_root s = Path [root_elem (explode s)];
 fun basic s = Path [basic_elem (explode s)];
 fun variable s = Path [variable_elem (explode s)];
+val parent = Path [Parent];
 
-fun is_absolute (Path (Root :: _)) = true
-  | is_absolute _ = false;
+fun is_absolute (Path xs) =
+  (case try List.last xs of
+    SOME (Root _) => true
+  | _ => false);
 
 fun is_basic (Path [Basic _]) = true
   | is_basic _ = false;
@@ -77,37 +89,42 @@
 
 (* append and norm *)
 
-(*append non-normal path (2n arg) to reversed normal one, result is normal*)
-fun rev_app xs [] = rev xs
-  | rev_app _ (Root :: ys) = rev_app [Root] ys
-  | rev_app (x :: xs) (Parent :: ys) =
-      if x = Parent orelse is_var x then rev_app (Parent :: x :: xs) ys
-      else if x = Root then rev_app (x :: xs) ys
-      else rev_app xs ys
-  | rev_app xs (y :: ys) = rev_app (y :: xs) ys;
+fun apply (y as Root _) _ = [y]
+  | apply Parent (xs as (Root _ :: _)) = xs
+  | apply Parent (Basic _ :: rest) = rest
+  | apply y xs = y :: xs;
 
-fun append (Path xs) (Path ys) = Path (rev_app (rev xs) ys);
+fun append (Path xs) (Path ys) = Path (fold_rev apply ys xs);
 fun appends paths = Library.foldl (uncurry append) (current, paths);
 val make = appends o map basic;
-fun norm path = rev_app [] path;
+
+fun norm elems = fold_rev apply elems [];
 
 
 (* implode *)
 
-fun implode_elem Root = ""
-  | implode_elem Parent = ".."
+local
+
+fun implode_elem (Root "") = ""
+  | implode_elem (Root s) = "//" ^ s
   | implode_elem (Basic s) = s
-  | implode_elem (Variable s) = "$" ^ s;
+  | implode_elem (Variable s) = "$" ^ s
+  | implode_elem Parent = "..";
+
+in
 
 fun implode_path (Path []) = "."
-  | implode_path (Path (Root :: xs)) = "/" ^ space_implode "/" (map implode_elem xs)
-  | implode_path (Path xs) = space_implode "/" (map implode_elem xs);
+  | implode_path (Path [Root ""]) = "/"
+  | implode_path (Path xs) = space_implode "/" (rev (map implode_elem xs));
+
+end;
 
 
 (* explode *)
 
-fun explode_elem "" = Root
-  | explode_elem ".." = Parent
+local
+
+fun explode_elem ".." = Parent
   | explode_elem "~" = Variable "HOME"
   | explode_elem "~~" = Variable "ISABELLE_HOME"
   | explode_elem s =
@@ -115,28 +132,35 @@
         "$" :: cs => variable_elem cs
       | cs => basic_elem cs);
 
-val explode_elems = map explode_elem o filter_out (fn c => c = "" orelse c = ".");
+val explode_elems =
+  rev o map explode_elem o filter_out (fn c => c = "" orelse c = ".");
+
+in
 
-fun explode_path str = Path (norm
-  (case space_explode "/" str of
-    "" :: ss => Root :: explode_elems ss
-  | ss => explode_elems ss));
+fun explode_path str =
+  let val (roots, raw_elems) =
+    (case take_prefix (equal "") (space_explode "/" str) |>> length of
+      (0, es) => ([], es)
+    | (1, es) => ([Root ""], es)
+    | (_, []) => ([Root ""], [])
+    | (_, e :: es) => ([root_elem (explode e)], es))
+  in Path (norm (explode_elems raw_elems @ roots)) end;
+
+end;
 
 
 (* base element *)
 
-fun split_path f (path as Path xs) =
-  (case try split_last xs of
-    SOME (prfx, Basic s) => f (prfx, s)
-  | _ => error ("Cannot split path into dir/base: " ^ quote (implode_path path)));
+fun split_path f (Path (Basic s :: xs)) = f (Path xs, s)
+  | split_path _ path = error ("Cannot split path into dir/base: " ^ quote (implode_path path));
 
-val dir = split_path (fn (prfx, _) => Path prfx);
+val dir = split_path #1;
 val base = split_path (fn (_, s) => Path [Basic s]);
 
-fun ext "" path = path
-  | ext e path = split_path (fn (prfx, s) => append (Path prfx) (basic (s ^ "." ^ e))) path;
+fun ext "" = I
+  | ext e = split_path (fn (prfx, s) => append prfx (basic (s ^ "." ^ e)));
 
-val split_ext = split_path (fn (prfx, s) => apfst (append (Path prfx))
+val split_ext = split_path (fn (prfx, s) => apfst (append prfx)
   (case take_suffix (fn c => c <> ".") (explode s) of
     ([], _) => (Path [Basic s], "")
   | (cs, e) => (Path [Basic (implode (take (length cs - 1) cs))], implode e)));
@@ -144,14 +168,20 @@
 
 (* expand variables *)
 
+local
+
 fun eval (Variable s) =
-    (case getenv s of
-      "" => error ("Undefined Isabelle environment variable: " ^ quote s)
-    | path => rep (explode_path path))
+      (case getenv s of
+        "" => error ("Undefined Isabelle environment variable: " ^ quote s)
+      | path => rep (explode_path path))
   | eval x = [x];
 
+in
+
 val expand = rep #> maps eval #> norm #> Path;
 
+end;
+
 
 (* source position *)
 
@@ -163,3 +193,4 @@
 val explode = explode_path;
 
 end;
+
--- a/src/Pure/Isar/class_target.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/Isar/class_target.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -333,8 +333,8 @@
     |> Sign.declare_const ((b, Type.strip_sorts ty'), mx)
     |> snd
     |> Thm.add_def false false (b_def, def_eq)
-    |>> Thm.varifyT_global
-    |-> (fn def_thm => PureThy.store_thm (b_def, def_thm)
+    |>> apsnd Thm.varifyT_global
+    |-> (fn (_, def_thm) => PureThy.store_thm (b_def, def_thm)
       #> snd
       #> null type_params ? register_operation class (c', (dict', SOME (Thm.symmetric def_thm))))
     |> Sign.add_const_constraint (c', SOME ty')
--- a/src/Pure/Isar/code.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/Isar/code.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -22,8 +22,6 @@
   (*constructor sets*)
   val constrset_of_consts: theory -> (string * typ) list
     -> string * ((string * sort) list * (string * typ list) list)
-  val abstype_cert: theory -> string * typ -> string
-    -> string * ((string * sort) list * ((string * typ) * (string * term)))
 
   (*code equations and certificates*)
   val mk_eqn: theory -> thm * bool -> thm * bool
@@ -37,8 +35,8 @@
   val cert_of_eqns: theory -> string -> (thm * bool) list -> cert
   val constrain_cert: theory -> sort list -> cert -> cert
   val typargs_deps_of_cert: theory -> cert -> (string * sort) list * (string * typ list) list
-  val equations_of_cert: theory -> cert ->
-    ((string * sort) list * typ) * ((string option * (term list * term)) * (thm option * bool)) list
+  val equations_of_cert: theory -> cert -> ((string * sort) list * typ)
+    * (((term * string option) list * (term * string option)) * (thm option * bool)) list
   val bare_thms_of_cert: theory -> cert -> thm list
   val pretty_cert: theory -> cert -> Pretty.T list
 
@@ -52,8 +50,7 @@
   val datatype_interpretation:
     (string * ((string * sort) list * (string * typ list) list)
       -> theory -> theory) -> theory -> theory
-  val add_abstype: string * typ -> string * typ -> theory -> Proof.state
-  val add_abstype_cmd: string -> string -> theory -> Proof.state
+  val add_abstype: thm -> theory -> theory
   val abstype_interpretation:
     (string * ((string * sort) list * ((string * typ) * (string * thm)))
       -> theory -> theory) -> theory -> theory
@@ -380,6 +377,7 @@
         val tfrees = Term.add_tfreesT ty [];
         val (tyco, vs) = ((apsnd o map) (dest_TFree) o dest_Type o snd o strip_type) ty
           handle TYPE _ => no_constr thy "bad type" c_ty
+        val _ = if tyco = "fun" then no_constr thy "bad type" c_ty else ();
         val _ = if has_duplicates (eq_fst (op =)) vs
           then no_constr thy "duplicate type variables in datatype" c_ty else ();
         val _ = if length tfrees <> length vs
@@ -411,22 +409,6 @@
     val cs''' = map (inst vs) cs'';
   in (tyco, (vs, rev cs''')) end;
 
-fun abstype_cert thy abs_ty rep =
-  let
-    val _ = pairself (fn c => if (is_some o AxClass.class_of_param thy) c
-      then error ("Is a class parameter: " ^ string_of_const thy c) else ()) (fst abs_ty, rep);
-    val ((tyco, sorts), (abs, (vs, ty'))) = ty_sorts thy abs_ty;
-    val (ty, ty_abs) = case ty'
-     of Type ("fun", [ty, ty_abs]) => (ty, ty_abs)
-      | _ => error ("Not a datatype abstractor:\n" ^ string_of_const thy abs
-          ^ " :: " ^ string_of_typ thy ty');
-    val _ = Thm.cterm_of thy (Const (rep, ty_abs --> ty)) handle TYPE _ =>
-      error ("Not a projection:\n" ^ string_of_const thy rep);
-    val param = Free ("x", ty_abs);
-    val cert = Logic.all param (Logic.mk_equals
-      (Const (abs, ty --> ty_abs) $ (Const (rep, ty_abs --> ty) $ param), param));
-  in (tyco, (vs ~~ sorts, ((fst abs_ty, ty), (rep, cert)))) end;    
-
 fun get_type_entry thy tyco = case these (Symtab.lookup ((the_types o the_exec) thy) tyco)
  of (_, entry) :: _ => SOME entry
   | _ => NONE;
@@ -441,7 +423,7 @@
 
 fun get_abstype_spec thy tyco = case get_type_entry thy tyco
  of SOME (vs, Abstractor spec) => (vs, spec)
-  | NONE => error ("Not an abstract type: " ^ tyco);
+  | _ => error ("Not an abstract type: " ^ tyco);
  
 fun get_type thy = fst o get_type_spec thy;
 
@@ -479,6 +461,16 @@
   in not (has_duplicates (op =) ((fold o fold_aterms)
     (fn Var (v, _) => cons v | _ => I) args [])) end;
 
+fun check_decl_ty thy (c, ty) =
+  let
+    val ty_decl = Sign.the_const_type thy c;
+  in if Sign.typ_equiv thy (Type.strip_sorts ty_decl, Type.strip_sorts ty) then ()
+    else bad_thm ("Type\n" ^ string_of_typ thy ty
+      ^ "\nof constant " ^ quote c
+      ^ "\nis incompatible with declared type\n"
+      ^ string_of_typ thy ty_decl)
+  end; 
+
 fun check_eqn thy { allow_nonlinear, allow_consts, allow_pats } thm (lhs, rhs) =
   let
     fun bad s = bad_thm (s ^ ":\n" ^ Display.string_of_thm_global thy thm);
@@ -514,7 +506,7 @@
               then ()
               else bad (quote c ^ " is not a constructor, on left hand side of equation")
             else bad ("Partially applied constant " ^ quote c ^ " on left hand side of equation")
-          end else bad ("Pattern not allowed, but constant " ^ quote c ^ " encountered on left hand side")
+          end else bad ("Pattern not allowed here, but constant " ^ quote c ^ " encountered on left hand side")
     val _ = map (check 0) args;
     val _ = if allow_nonlinear orelse is_linear thm then ()
       else bad "Duplicate variables on left hand side of equation";
@@ -524,13 +516,7 @@
       else bad "Constructor as head in equation";
     val _ = if not (is_abstr thy c) then ()
       else bad "Abstractor as head in equation";
-    val ty_decl = Sign.the_const_type thy c;
-    val _ = if Sign.typ_equiv thy (Type.strip_sorts ty_decl, Type.strip_sorts ty)
-      then () else bad_thm ("Type\n" ^ string_of_typ thy ty
-        ^ "\nof equation\n"
-        ^ Display.string_of_thm_global thy thm
-        ^ "\nis incompatible with declared function type\n"
-        ^ string_of_typ thy ty_decl)
+    val _ = check_decl_ty thy (c, ty);
   in () end;
 
 fun gen_assert_eqn thy check_patterns (thm, proper) =
@@ -551,17 +537,19 @@
            | THM _ => bad "Not a proper equation";
     val (rep, lhs) = dest_comb full_lhs
       handle TERM _ => bad "Not an abstract equation";
-    val tyco = (fst o dest_Type o domain_type o snd o dest_Const) rep
+    val (rep_const, ty) = dest_Const rep;
+    val (tyco, sorts) = ((apsnd o map) (snd o dest_TVar) o dest_Type o domain_type) ty
       handle TERM _ => bad "Not an abstract equation";
     val _ = case some_tyco of SOME tyco' => if tyco = tyco' then ()
           else bad ("Abstract type mismatch:" ^ quote tyco ^ " vs. " ^ quote tyco')
       | NONE => ();
-    val (_, (_, (rep', _))) = get_abstype_spec thy tyco;
-    val rep_const = (fst o dest_Const) rep;
+    val (vs', (_, (rep', _))) = get_abstype_spec thy tyco;
     val _ = if rep_const = rep' then ()
       else bad ("Projection mismatch: " ^ quote rep_const ^ " vs. " ^ quote rep');
     val _ = check_eqn thy { allow_nonlinear = false,
       allow_consts = false, allow_pats = false } thm (lhs, rhs);
+    val _ = if forall (Sign.subsort thy) (sorts ~~ map snd  vs') then ()
+      else error ("Sort constraints on type arguments are weaker than in abstype certificate.")
   in (thm, tyco) end;
 
 fun assert_eqn thy = error_thm (gen_assert_eqn thy true);
@@ -657,6 +645,31 @@
 fun canonize_thms thy = desymbolize_tvars thy #> same_arity thy #> map (desymbolize_vars thy);
 
 
+(* abstype certificates *)
+
+fun check_abstype_cert thy proto_thm =
+  let
+    val thm = (AxClass.unoverload thy o meta_rewrite thy) proto_thm;
+    fun bad s = bad_thm (s ^ ":\n" ^ Display.string_of_thm_global thy thm);
+    val (lhs, rhs) = Logic.dest_equals (Thm.plain_prop_of thm)
+      handle TERM _ => bad "Not an equation"
+           | THM _ => bad "Not a proper equation";
+    val ((abs, raw_ty), ((rep, rep_ty), param)) = (apsnd (apfst dest_Const o dest_comb)
+        o apfst dest_Const o dest_comb) lhs
+      handle TERM _ => bad "Not an abstype certificate";
+    val _ = pairself (fn c => if (is_some o AxClass.class_of_param thy) c
+      then error ("Is a class parameter: " ^ string_of_const thy c) else ()) (abs, rep);
+    val _ = check_decl_ty thy (abs, raw_ty);
+    val _ = check_decl_ty thy (rep, rep_ty);
+    val var = (fst o dest_Var) param
+      handle TERM _ => bad "Not an abstype certificate";
+    val _ = if param = rhs then () else bad "Not an abstype certificate";
+    val ((tyco, sorts), (abs, (vs, ty'))) = ty_sorts thy (abs, Logic.unvarifyT_global raw_ty);
+    val ty = domain_type ty';
+    val ty_abs = range_type ty';
+  in (tyco, (vs ~~ sorts, ((abs, ty), (rep, thm)))) end;
+
+
 (* code equation certificates *)
 
 fun build_head thy (c, ty) =
@@ -804,7 +817,7 @@
       let
         val vs = fst (typscheme_abs thy abs_thm);
         val (_, concrete_thm) = concretify_abs thy tyco abs_thm;
-      in (vs, add_rhss_of_eqn thy (Thm.prop_of abs_thm) []) end;
+      in (vs, add_rhss_of_eqn thy (map_types Logic.unvarifyT_global (Thm.prop_of concrete_thm)) []) end;
 
 fun equations_of_cert thy (cert as Equations (cert_thm, propers)) =
       let
@@ -814,22 +827,23 @@
           |> Local_Defs.expand [snd (get_head thy cert_thm)]
           |> Thm.varifyT_global
           |> Conjunction.elim_balanced (length propers);
-      in (tyscm, map (pair NONE o dest_eqn thy o Thm.prop_of) thms ~~ (map SOME thms ~~ propers)) end
+        fun abstractions (args, rhs) = (map (rpair NONE) args, (rhs, NONE));
+      in (tyscm, map (abstractions o dest_eqn thy o Thm.prop_of) thms ~~ (map SOME thms ~~ propers)) end
   | equations_of_cert thy (Projection (t, tyco)) =
       let
         val (_, ((abs, _), _)) = get_abstype_spec thy tyco;
         val tyscm = typscheme_projection thy t;
         val t' = map_types Logic.varifyT_global t;
-      in (tyscm, [((SOME abs, dest_eqn thy t'), (NONE, true))]) end
+        fun abstractions (args, rhs) = (map (rpair (SOME abs)) args, (rhs, NONE));
+      in (tyscm, [((abstractions o dest_eqn thy) t', (NONE, true))]) end
   | equations_of_cert thy (Abstract (abs_thm, tyco)) =
       let
         val tyscm = typscheme_abs thy abs_thm;
         val (abs, concrete_thm) = concretify_abs thy tyco abs_thm;
-        val _ = fold_aterms (fn Const (c, _) => if c = abs
-          then error ("Abstraction violation in abstract code equation\n" ^ Display.string_of_thm_global thy abs_thm)
-          else I | _ => I) (Thm.prop_of abs_thm);
+        fun abstractions (args, rhs) = (map (rpair NONE) args, (rhs, (SOME abs)));
       in
-        (tyscm, [((SOME abs, dest_eqn thy (Thm.prop_of concrete_thm)), (SOME (Thm.varifyT_global abs_thm), true))])
+        (tyscm, [((abstractions o dest_eqn thy o Thm.prop_of) concrete_thm,
+          (SOME (Thm.varifyT_global abs_thm), true))])
       end;
 
 fun pretty_cert thy (cert as Equations _) =
@@ -1104,12 +1118,13 @@
     val (old_constrs, some_old_proj) =
       case these (Symtab.lookup ((the_types o the_exec) thy) tyco)
        of (_, (_, Constructors cos)) :: _ => (map fst cos, NONE)
-        | (_, (_, Abstractor (_, (co, _)))) :: _ => ([], SOME co)
+        | (_, (_, Abstractor ((co, _), (proj, _)))) :: _ => ([co], SOME proj)
         | [] => ([], NONE)
     val outdated_funs = case some_old_proj
      of NONE => old_constrs
       | SOME old_proj => Symtab.fold
-          (fn (c, ((_, spec), _)) => if member (op =) (the_list (associated_abstype spec)) tyco
+          (fn (c, ((_, spec), _)) =>
+            if member (op =) (the_list (associated_abstype spec)) tyco
             then insert (op =) c else I)
             ((the_functions o the_exec) thy) (old_proj :: old_constrs);
     fun drop_outdated_cases cases = fold Symtab.delete_safe
@@ -1152,25 +1167,19 @@
 fun abstype_interpretation f = Abstype_Interpretation.interpretation
   (fn (tyco, _) => fn thy => f (tyco, get_abstype_spec thy tyco) thy);
 
-fun add_abstype proto_abs proto_rep thy =
+fun add_abstype proto_thm thy =
   let
-    val (abs, rep) = pairself (unoverload_const_typ thy) (proto_abs, proto_rep);
-    val (tyco, (vs, (abs_ty as (abs, ty), (rep, cert_prop)))) = abstype_cert thy abs (fst rep);
-    fun after_qed [[cert]] = ProofContext.theory
-      (del_eqns abs
-      #> register_type (tyco, (vs, Abstractor (abs_ty, (rep, cert))))
-      #> change_fun_spec false rep ((K o Proj)
-        (map_types Logic.varifyT_global (mk_proj tyco vs ty abs rep), tyco))
-      #> Abstype_Interpretation.data (tyco, serial ()));
+    val (tyco, (vs, (abs_ty as (abs, ty), (rep, cert)))) =
+      error_thm (check_abstype_cert thy) proto_thm;
   in
     thy
-    |> ProofContext.init
-    |> Proof.theorem_i NONE after_qed [[(cert_prop, [])]]
+    |> del_eqns abs
+    |> register_type (tyco, (vs, Abstractor (abs_ty, (rep, cert))))
+    |> change_fun_spec false rep ((K o Proj)
+        (map_types Logic.varifyT_global (mk_proj tyco vs ty abs rep), tyco))
+    |> Abstype_Interpretation.data (tyco, serial ())
   end;
 
-fun add_abstype_cmd raw_abs raw_rep thy =
-  add_abstype (read_bare_const thy raw_abs) (read_bare_const thy raw_rep) thy;
-
 
 (** infrastructure **)
 
@@ -1200,6 +1209,7 @@
     val code_attribute_parser =
       Args.del |-- Scan.succeed (mk_attribute del_eqn)
       || Args.$$$ "nbe" |-- Scan.succeed (mk_attribute add_nbe_eqn)
+      || Args.$$$ "abstype" |-- Scan.succeed (mk_attribute add_abstype)
       || Args.$$$ "abstract" |-- Scan.succeed (mk_attribute add_abs_eqn)
       || (Args.$$$ "target" |-- Args.colon |-- Args.name >>
            (mk_attribute o code_target_attr))
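
With the interactive code_abstype command removed, an abstract code type is now
registered by attaching the new "code abstype" attribute to a theorem of the
shape abs (rep x) = x, as checked by check_abstype_cert above.  A minimal Isar
sketch, assuming hypothetical constants Abs_T and Rep_T with such a
cancellation law (proof left open):

  lemma T_abs_rep [code abstype]:
    "Abs_T (Rep_T x) = x"
    sorry
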
--- a/src/Pure/Isar/isar_cmd.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/Isar/isar_cmd.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -19,7 +19,10 @@
   val declaration: bool -> Symbol_Pos.text * Position.T -> local_theory -> local_theory
   val simproc_setup: string -> string list -> Symbol_Pos.text * Position.T -> string list ->
     local_theory -> local_theory
-  val hide_names: bool -> string * xstring list -> theory -> theory
+  val hide_class: bool -> xstring list -> theory -> theory
+  val hide_type: bool -> xstring list -> theory -> theory
+  val hide_const: bool -> xstring list -> theory -> theory
+  val hide_fact: bool -> xstring list -> theory -> theory
   val have: (Attrib.binding * (string * string list) list) list -> bool -> Proof.state -> Proof.state
   val hence: (Attrib.binding * (string * string list) list) list -> bool -> Proof.state -> Proof.state
   val show: (Attrib.binding * (string * string list) list) list -> bool -> Proof.state -> Proof.state
@@ -196,23 +199,19 @@
 
 (* hide names *)
 
-val hide_kinds =
- [("class", (Sign.intern_class, can o Sign.certify_class, Sign.hide_class)),
-  ("type", (Sign.intern_type, Sign.declared_tyname, Sign.hide_type)),
-  ("const", (Sign.intern_const, Sign.declared_const, Sign.hide_const)),
-  ("fact", (PureThy.intern_fact, PureThy.defined_fact, PureThy.hide_fact))];
+fun hide_names intern check hide fully xnames thy =
+  let
+    val names = map (intern thy) xnames;
+    val bads = filter_out (check thy) names;
+  in
+    if null bads then fold (hide fully) names thy
+    else error ("Attempt to hide undeclared item(s): " ^ commas_quote bads)
+  end;
 
-fun hide_names b (kind, xnames) thy =
-  (case AList.lookup (op =) hide_kinds kind of
-    SOME (intern, check, hide) =>
-      let
-        val names = map (intern thy) xnames;
-        val bads = filter_out (check thy) names;
-      in
-        if null bads then fold (hide b) names thy
-        else error ("Attempt to hide undeclared item(s): " ^ commas_quote bads)
-      end
-  | NONE => error ("Bad name space specification: " ^ quote kind));
+val hide_class = hide_names Sign.intern_class (can o Sign.certify_class) Sign.hide_class;
+val hide_type = hide_names Sign.intern_type Sign.declared_tyname Sign.hide_type;
+val hide_const = hide_names Sign.intern_const Sign.declared_const Sign.hide_const;
+val hide_fact = hide_names PureThy.intern_fact PureThy.defined_fact PureThy.hide_fact;
 
 
 (* goals *)
--- a/src/Pure/Isar/isar_syn.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/Isar/isar_syn.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -108,10 +108,10 @@
       >> (fn ((args, a), mx) => Typedecl.typedecl (a, map (rpair dummyS) args, mx) #> snd));
 
 val _ =
-  OuterSyntax.command "types" "declare type abbreviations" K.thy_decl
+  OuterSyntax.local_theory "types" "declare type abbreviations" K.thy_decl
     (Scan.repeat1
       (P.type_args -- P.binding -- (P.$$$ "=" |-- P.!!! (P.typ -- P.opt_mixfix')))
-      >> (Toplevel.theory o Sign.add_tyabbrs o map (fn ((args, a), (T, mx)) => (a, args, T, mx))));
+      >> (fold (fn ((args, a), (rhs, mx)) => snd o Typedecl.abbrev_cmd (a, args, mx) rhs)));
 
 val _ =
   OuterSyntax.command "nonterminals" "declare types treated as grammar nonterminal symbols"
@@ -280,10 +280,15 @@
   OuterSyntax.command "local" "enable prefixing of theory name" K.thy_decl
     (Scan.succeed (Toplevel.theory Sign.local_path));
 
-val _ =
-  OuterSyntax.command "hide" "hide names from given name space" K.thy_decl
-    ((P.opt_keyword "open" >> not) -- (P.name -- Scan.repeat1 P.xname) >>
-      (Toplevel.theory o uncurry IsarCmd.hide_names));
+fun hide_names name hide what =
+  OuterSyntax.command name ("hide " ^ what ^ " from name space") K.thy_decl
+    ((P.opt_keyword "open" >> not) -- Scan.repeat1 P.xname >>
+      (Toplevel.theory o uncurry hide));
+
+val _ = hide_names "hide_class" IsarCmd.hide_class "classes";
+val _ = hide_names "hide_type" IsarCmd.hide_type "types";
+val _ = hide_names "hide_const" IsarCmd.hide_const "constants";
+val _ = hide_names "hide_fact" IsarCmd.hide_fact "facts";
 
 
 (* use ML text *)
@@ -480,11 +485,6 @@
   OuterSyntax.command "code_datatype" "define set of code datatype constructors" K.thy_decl
     (Scan.repeat1 P.term >> (Toplevel.theory o Code.add_datatype_cmd));
 
-val _ =
-  OuterSyntax.command "code_abstype" "define abstract code type" K.thy_goal
-    (P.term -- P.term >> (fn (abs, rep) => Toplevel.print
-      o Toplevel.theory_to_proof (Code.add_abstype_cmd abs rep)));
-
 
 
 (** proof commands **)
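
The former generic hide command is replaced by the kind-specific commands
declared above, and "types" abbreviations now go through Typedecl.abbrev_cmd.
A minimal theory sketch, assuming a hypothetical constant foo with fact
foo_def, and HOL's nat for the abbreviation:

  types 'a assoc = "('a * nat) list"

  hide_const (open) foo
  hide_fact foo_def
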
--- a/src/Pure/Isar/locale.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/Isar/locale.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -95,15 +95,9 @@
   intros: thm option * thm option,
   axioms: thm list,
   (** dynamic part **)
-(* <<<<<<< local
-  decls: (declaration * serial) list * (declaration * serial) list,
-    (* type and term syntax declarations *)
-  notes: ((string * (Attrib.binding * (thm list * Attrib.src list) list) list) * serial) list,
-======= *)
   syntax_decls: (declaration * serial) list,
     (* syntax declarations *)
   notes: ((string * (Attrib.binding * (thm list * Attrib.src list) list) list) * serial) list,
-(* >>>>>>> other *)
     (* theorem declarations *)
   dependencies: ((string * morphism) * serial) list
     (* locale dependencies (sublocale relation) *)
@@ -148,13 +142,8 @@
   thy |> Locales.map (Name_Space.define true (Sign.naming_of thy)
     (binding,
       mk_locale ((parameters, spec, intros, axioms),
-(* <<<<<<< local
-        ((pairself (map (fn decl => (decl, serial ()))) decls, map (fn n => (n, serial ())) notes),
-          map (fn d => (d, serial ())) dependencies))) #> snd);
-======= *)
         ((map (fn decl => (decl, serial ())) syntax_decls, map (fn n => (n, serial ())) notes),
           map (fn d => (d, serial ())) dependencies))) #> snd);
-(* >>>>>>> other *)
 
 fun change_locale name =
   Locales.map o apsnd o Symtab.map_entry name o map_locale o apsnd;
@@ -527,16 +516,7 @@
 
 (* Declarations *)
 
-(* <<<<<<< local
-local
-
-fun decl_attrib decl phi = Thm.declaration_attribute (K (decl phi));
-
-fun add_decls add loc decl =
-  ProofContext.theory ((change_locale loc o apfst o apfst) (add (decl, serial ()))) #>
-======= *)
 fun add_declaration loc decl =
-(* >>>>>>> other *)
   add_thmss loc ""
     [((Binding.conceal Binding.empty,
         [Attrib.internal (fn phi => Thm.declaration_attribute (K (decl phi)))]),
--- a/src/Pure/Isar/overloading.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/Isar/overloading.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -165,8 +165,9 @@
 
 fun declare c_ty = pair (Const c_ty);
 
-fun define checked b (c, t) = Thm.add_def (not checked) true
-  (b, Logic.mk_equals (Const (c, Term.fastype_of t), t));
+fun define checked b (c, t) =
+  Thm.add_def (not checked) true (b, Logic.mk_equals (Const (c, Term.fastype_of t), t))
+  #>> snd;
 
 fun confirm b = map_overloading (filter_out (fn (_, (c', _)) => c' = Binding.name_of b))
   #> Local_Theory.target synchronize_syntax
--- a/src/Pure/Isar/proof_context.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/Isar/proof_context.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -62,6 +62,8 @@
   val read_const_proper: Proof.context -> bool -> string -> term
   val read_const: Proof.context -> bool -> string -> term
   val allow_dummies: Proof.context -> Proof.context
+  val check_tvar: Proof.context -> indexname * sort -> indexname * sort
+  val check_tfree: Proof.context -> string * sort -> string * sort
   val decode_term: Proof.context -> term -> term
   val standard_infer_types: Proof.context -> term list -> term list
   val read_term_pattern: Proof.context -> string -> term
@@ -606,19 +608,26 @@
 
 (* types *)
 
-fun get_sort ctxt def_sort raw_env =
+fun get_sort ctxt raw_env =
   let
     val tsig = tsig_of ctxt;
 
     fun eq ((xi, S), (xi', S')) =
       Term.eq_ix (xi, xi') andalso Type.eq_sort tsig (S, S');
     val env = distinct eq raw_env;
-    val _ = (case duplicates (eq_fst (op =)) env of [] => ()
+    val _ =
+      (case duplicates (eq_fst (op =)) env of
+        [] => ()
       | dups => error ("Inconsistent sort constraints for type variable(s) "
           ^ commas_quote (map (Term.string_of_vname' o fst) dups)));
 
+    fun lookup xi =
+      (case AList.lookup (op =) env xi of
+        NONE => NONE
+      | SOME S => if S = dummyS then NONE else SOME S);
+
     fun get xi =
-      (case (AList.lookup (op =) env xi, def_sort xi) of
+      (case (lookup xi, Variable.def_sort ctxt xi) of
         (NONE, NONE) => Type.defaultS tsig
       | (NONE, SOME S) => S
       | (SOME S, NONE) => S
@@ -629,6 +638,9 @@
             " for type variable " ^ quote (Term.string_of_vname' xi)));
   in get end;
 
+fun check_tvar ctxt (xi, S) = (xi, get_sort ctxt [(xi, S)] xi);
+fun check_tfree ctxt (x, S) = apfst fst (check_tvar ctxt ((x, ~1), S));
+
 local
 
 fun intern_skolem ctxt def_type x =
@@ -647,7 +659,7 @@
 in
 
 fun term_context ctxt =
-  {get_sort = get_sort ctxt (Variable.def_sort ctxt),
+  {get_sort = get_sort ctxt,
    map_const = fn a => ((true, #1 (Term.dest_Const (read_const_proper ctxt false a)))
      handle ERROR _ => (false, Consts.intern (consts_of ctxt) a)),
    map_free = intern_skolem ctxt (Variable.def_type ctxt false)};
@@ -731,9 +743,8 @@
 
 fun parse_typ ctxt text =
   let
-    val get_sort = get_sort ctxt (Variable.def_sort ctxt);
     val (syms, pos) = Syntax.parse_token Markup.typ text;
-    val T = Syntax.standard_parse_typ ctxt (syn_of ctxt) get_sort (syms, pos)
+    val T = Syntax.standard_parse_typ ctxt (syn_of ctxt) (get_sort ctxt) (syms, pos)
       handle ERROR msg => cat_error msg ("Failed to parse type" ^ Position.str_of pos);
   in T end;
 
--- a/src/Pure/Isar/specification.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/Isar/specification.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -173,7 +173,7 @@
         fold_map Thm.add_axiom
           (map (apfst (fn a => Binding.map_name (K a) b))
             (PureThy.name_multi (Name.of_binding b) (map subst props)))
-        #>> (fn ths => ((b, atts), [(ths, [])])));
+        #>> (fn ths => ((b, atts), [(map #2 ths, [])])));
 
     (*facts*)
     val (facts, facts_lthy) = axioms_thy
--- a/src/Pure/Isar/theory_target.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/Isar/theory_target.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -315,7 +315,7 @@
         | NONE =>
             if is_some (Class_Target.instantiation_param lthy b)
             then AxClass.define_overloaded name' (head, rhs')
-            else Thm.add_def false false (name', Logic.mk_equals (lhs', rhs')));
+            else Thm.add_def false false (name', Logic.mk_equals (lhs', rhs')) #>> snd);
     val def = Local_Defs.trans_terms lthy3
       [(*c == global.c xs*)     local_def,
        (*global.c xs == rhs'*)  global_def,
--- a/src/Pure/Isar/typedecl.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/Isar/typedecl.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -1,21 +1,29 @@
 (*  Title:      Pure/Isar/typedecl.ML
     Author:     Makarius
 
-Type declarations (within the object logic).
+Type declarations (with object-logic arities) and type abbreviations.
 *)
 
 signature TYPEDECL =
 sig
   val read_constraint: Proof.context -> string option -> sort
-  val predeclare_constraints: binding * (string * sort) list * mixfix ->
-    local_theory -> string * local_theory
+  val basic_typedecl: binding * int * mixfix -> local_theory -> string * local_theory
   val typedecl: binding * (string * sort) list * mixfix -> local_theory -> typ * local_theory
   val typedecl_global: binding * (string * sort) list * mixfix -> theory -> typ * theory
+  val abbrev: binding * string list * mixfix -> typ -> local_theory -> string * local_theory
+  val abbrev_cmd: binding * string list * mixfix -> string -> local_theory -> string * local_theory
+  val abbrev_global: binding * string list * mixfix -> typ -> theory -> string * theory
 end;
 
 structure Typedecl: TYPEDECL =
 struct
 
+(* constraints *)
+
+fun read_constraint _ NONE = dummyS
+  | read_constraint ctxt (SOME s) = Syntax.read_sort ctxt s;
+
+
 (* primitives *)
 
 fun object_logic_arity name thy =
@@ -23,46 +31,44 @@
     NONE => thy
   | SOME S => AxClass.axiomatize_arity (name, replicate (Sign.arity_number thy name) S, S) thy);
 
-fun basic_typedecl (b, n, mx) lthy =
+fun basic_decl decl (b, n, mx) lthy =
   let val name = Local_Theory.full_name lthy b in
     lthy
-    |> Local_Theory.theory (Sign.add_types [(b, n, NoSyn)] #> object_logic_arity name)
+    |> Local_Theory.theory (decl name)
     |> Local_Theory.type_notation true Syntax.mode_default [(Type (name, replicate n dummyT), mx)]
     |> Local_Theory.type_alias b name
     |> pair name
   end;
 
-
-(* syntactic version -- useful for internalizing additional types/terms beforehand *)
-
-fun read_constraint _ NONE = dummyS
-  | read_constraint ctxt (SOME s) = Syntax.read_sort ctxt s;
-
-fun predeclare_constraints (b, raw_args, mx) =
-  basic_typedecl (b, length raw_args, mx) ##>
-  fold (Variable.declare_constraints o Logic.mk_type o TFree) raw_args;
+fun basic_typedecl (b, n, mx) =
+  basic_decl (fn name => Sign.add_types [(b, n, NoSyn)] #> object_logic_arity name) (b, n, mx);
 
 
-(* regular version -- without dependencies on type parameters of the context *)
+(* global type -- without dependencies on type parameters of the context *)
 
-fun typedecl (b, raw_args, mx) lthy =
+fun global_type lthy (b, raw_args) =
   let
     fun err msg = error (msg ^ " in type declaration " ^ quote (Binding.str_of b));
 
     val _ = has_duplicates (eq_fst op =) raw_args andalso err "Duplicate parameters";
-    val args = raw_args
-      |> map (fn (a, S) => (a, if S = dummyS then ProofContext.default_sort lthy (a, ~1) else S));
-    val T = Type (Local_Theory.full_name lthy b, map TFree args);
+    val args = map (TFree o ProofContext.check_tfree lthy) raw_args;
+    val T = Type (Local_Theory.full_name lthy b, args);
 
     val bad_args =
       #2 (Term.dest_Type (Logic.type_map (singleton (Variable.polymorphic lthy)) T))
       |> filter_out Term.is_TVar;
-    val _ = not (null bad_args) andalso
+    val _ = null bad_args orelse
       err ("Locally fixed type arguments " ^
         commas_quote (map (Syntax.string_of_typ lthy) bad_args));
-  in
+  in T end;
+
+
+(* type declarations *)
+
+fun typedecl (b, raw_args, mx) lthy =
+  let val T = global_type lthy (b, raw_args) in
     lthy
-    |> basic_typedecl (b, length args, mx)
+    |> basic_typedecl (b, length raw_args, mx)
     |> snd
     |> Variable.declare_typ T
     |> pair T
@@ -73,5 +79,28 @@
   #> typedecl decl
   #> Local_Theory.exit_result_global Morphism.typ;
 
+
+(* type abbreviations *)
+
+fun gen_abbrev prep_typ (b, vs, mx) raw_rhs lthy =
+  let
+    val Type (name, _) = global_type lthy (b, map (rpair dummyS) vs);
+    val rhs = prep_typ lthy raw_rhs
+      handle ERROR msg => cat_error msg ("in type abbreviation " ^ quote (Binding.str_of b));
+  in
+    lthy
+    |> basic_decl (fn _ => Sign.add_type_abbrev (b, vs, rhs)) (b, length vs, mx)
+    |> snd
+    |> pair name
+  end;
+
+val abbrev = gen_abbrev ProofContext.cert_typ_syntax;
+val abbrev_cmd = gen_abbrev ProofContext.read_typ_syntax;
+
+fun abbrev_global decl rhs =
+  Theory_Target.init NONE
+  #> abbrev decl rhs
+  #> Local_Theory.exit_result_global (K I);
+
 end;
 
--- a/src/Pure/ML-Systems/polyml.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/ML-Systems/polyml.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -66,3 +66,6 @@
   use_text context (1, "pp") false
     ("PolyML.addPrettyPrinter (fn _ => fn _ => ml_pretty o Pretty.to_ML o (" ^ pp ^ "))");
 
+val ml_make_string =
+  "(fn x => Pretty.string_of (Pretty.from_ML (pretty_ml (PolyML.prettyRepresentation (x, get_print_depth ())))))";
+
--- a/src/Pure/ML-Systems/polyml_common.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/ML-Systems/polyml_common.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -55,7 +55,7 @@
 fun ml_prompts p1 p2 = (PolyML.Compiler.prompt1 := p1; PolyML.Compiler.prompt2 := p2);
 
 
-(* print depth *)
+(* toplevel printing *)
 
 local
   val depth = Unsynchronized.ref 10;
@@ -66,6 +66,8 @@
 
 val error_depth = PolyML.error_depth;
 
+val ml_make_string = "PolyML.makestring";
+
 
 
 (** interrupts **)
--- a/src/Pure/ML-Systems/smlnj.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/ML-Systems/smlnj.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -61,6 +61,8 @@
     Control.Print.printLength := dest_int n);
 end;
 
+val ml_make_string = "(fn _ => \"?\")";
+
 
 (*prompts*)
 fun ml_prompts p1 p2 =
--- a/src/Pure/ML/ml_antiquote.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/ML/ml_antiquote.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -59,6 +59,8 @@
 
 structure P = OuterParse;
 
+val _ = inline "make_string" (Scan.succeed ml_make_string);
+
 val _ = value "binding"
   (Scan.lift (P.position Args.name) >> (fn name => ML_Syntax.atomic (ML_Syntax.make_binding name)));
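
The new make_string antiquotation simply inlines ml_make_string, i.e. a
pretty-printer under Poly/ML and a dummy that yields "?" under SML/NJ (see the
ML-Systems changes above).  A hypothetical use inside a theory:

  ML {* writeln (@{make_string} (1, "two", [true, false])) *}
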
 
--- a/src/Pure/ML/ml_env.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/ML/ml_env.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -9,6 +9,7 @@
   val inherit: Context.generic -> Context.generic -> Context.generic
   val name_space: ML_Name_Space.T
   val local_context: use_context
+  val check_functor: string -> unit
 end
 
 structure ML_Env: ML_ENV =
@@ -88,5 +89,11 @@
   print = writeln,
   error = error};
 
+val is_functor = is_some o #lookupFunct name_space;
+
+fun check_functor name =
+  if not (is_functor "Table") (*mask dummy version of name_space*) orelse is_functor name then ()
+  else error ("Unknown ML functor: " ^ quote name);
+
 end;
 
--- a/src/Pure/ProofGeneral/proof_general_pgip.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/ProofGeneral/proof_general_pgip.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -955,7 +955,7 @@
            end)
         | _ => raise PGIP "Invalid PGIP packet received")
      handle PGIP msg =>
-            (Output.error_msg ((msg ^ "\nPGIP error occured in XML text below:\n") ^
+            (Output.error_msg ((msg ^ "\nPGIP error occurred in XML text below:\n") ^
                                (XML.string_of xml));
              true))
 
--- a/src/Pure/Syntax/syntax.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/Syntax/syntax.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -290,15 +290,14 @@
     val SynExt.SynExt {xprods, consts = consts2, prmodes = prmodes2,
       parse_ast_translation, parse_rules, parse_translation, print_translation, print_rules,
       print_ast_translation, token_translation} = syn_ext;
-    val input' = if inout then fold (insert (op =)) xprods input else input;
-    val changed = length input <> length input';
+    val new_xprods =
+      if inout then distinct (op =) (filter_out (member (op =) input) xprods) else [];
     fun if_inout xs = if inout then xs else [];
   in
     Syntax
-    ({input = input',
-      lexicon =
-        if changed then fold Scan.extend_lexicon (SynExt.delims_of xprods) lexicon else lexicon,
-      gram = if changed then Parser.extend_gram gram xprods else gram,
+    ({input = new_xprods @ input,
+      lexicon = fold Scan.extend_lexicon (SynExt.delims_of new_xprods) lexicon,
+      gram = Parser.extend_gram gram new_xprods,
       consts = Library.merge (op =) (consts1, filter_out Lexicon.is_marked consts2),
       prmodes = insert (op =) mode (Library.merge (op =) (prmodes1, prmodes2)),
       parse_ast_trtab =
--- a/src/Pure/System/cygwin.scala	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/System/cygwin.scala	Wed Apr 21 11:23:04 2010 +0200
@@ -91,9 +91,9 @@
 
   def check_root(): String =
   {
-    val root_env = java.lang.System.getenv("CYGWIN_ROOT")
+    val this_cygwin = java.lang.System.getenv("THIS_CYGWIN")
     val root =
-      if (root_env != null && root_env != "") root_env
+      if (this_cygwin != null && this_cygwin != "") this_cygwin
       else
         query_registry(CYGWIN_SETUP1, "rootdir") orElse
         query_registry(CYGWIN_SETUP2, "rootdir") getOrElse
--- a/src/Pure/System/gui_setup.scala	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/System/gui_setup.scala	Wed Apr 21 11:23:04 2010 +0200
@@ -43,19 +43,17 @@
     }
 
     // values
-    Platform.defaults match {
-      case None =>
-      case Some((name, None)) => text.append("Platform: " + name + "\n")
-      case Some((name1, Some(name2))) =>
-        text.append("Main platform: " + name1 + "\n")
-        text.append("Alternative platform: " + name2 + "\n")
-    }
-    if (Platform.is_windows) {
+    if (Platform.is_windows)
       text.append("Cygwin root: " + Cygwin.check_root() + "\n")
-    }
+    text.append("JVM platform: " + Platform.jvm_platform + "\n")
     try {
       val isabelle_system = new Isabelle_System
+      text.append("ML platform: " + isabelle_system.getenv("ML_PLATFORM") + "\n")
+      text.append("Isabelle platform: " + isabelle_system.getenv("ISABELLE_PLATFORM") + "\n")
+      val platform64 = isabelle_system.getenv("ISABELLE_PLATFORM64")
+      if (platform64 != "") text.append("Isabelle platform (64 bit): " + platform64 + "\n")
       text.append("Isabelle home: " + isabelle_system.getenv("ISABELLE_HOME") + "\n")
+      text.append("Isabelle java: " + isabelle_system.this_java() + "\n")
     } catch {
       case e: RuntimeException => text.append(e.getMessage + "\n")
     }
--- a/src/Pure/System/isabelle_system.scala	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/System/isabelle_system.scala	Wed Apr 21 11:23:04 2010 +0200
@@ -26,7 +26,8 @@
   {
     import scala.collection.JavaConversions._
 
-    val env0 = Map(java.lang.System.getenv.toList: _*)
+    val env0 = Map(java.lang.System.getenv.toList: _*) +
+      ("THIS_JAVA" -> this_java())
 
     val isabelle_home =
       env0.get("ISABELLE_HOME") match {
@@ -88,31 +89,39 @@
 
   /* expand_path */
 
+  private val Root = new Regex("(//+[^/]*|/)(.*)")
+  private val Only_Root = new Regex("//+[^/]*|/")
+
   def expand_path(isabelle_path: String): String =
   {
     val result_path = new StringBuilder
-    def init(path: String)
+    def init(path: String): String =
     {
-      if (path.startsWith("/")) {
-        result_path.clear
-        result_path += '/'
+      path match {
+        case Root(root, rest) =>
+          result_path.clear
+          result_path ++= root
+          rest
+        case _ => path
       }
     }
     def append(path: String)
     {
-      init(path)
-      for (p <- path.split("/") if p != "" && p != ".") {
+      val rest = init(path)
+      for (p <- rest.split("/") if p != "" && p != ".") {
         if (p == "..") {
           val result = result_path.toString
-          val i = result.lastIndexOf("/")
-          if (result == "")
-            result_path ++= ".."
-          else if (result.substring(i + 1) == "..")
-            result_path ++= "/.."
-          else if (i < 1)
-            result_path.length = i + 1
-          else
-            result_path.length = i
+          if (!Only_Root.pattern.matcher(result).matches) {
+            val i = result.lastIndexOf("/")
+            if (result == "")
+              result_path ++= ".."
+            else if (result.substring(i + 1) == "..")
+              result_path ++= "/.."
+            else if (i < 0)
+              result_path.length = 0
+            else
+              result_path.length = i
+          }
         }
         else {
           val len = result_path.length
@@ -122,8 +131,8 @@
         }
       }
     }
-    init(isabelle_path)
-    for (p <- isabelle_path.split("/")) {
+    val rest = init(isabelle_path)
+    for (p <- rest.split("/")) {
       if (p.startsWith("$")) append(getenv_strict(p.substring(1)))
       else if (p == "~") append(getenv_strict("HOME"))
       else if (p == "~~") append(getenv_strict("ISABELLE_HOME"))
--- a/src/Pure/System/platform.scala	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/System/platform.scala	Wed Apr 21 11:23:04 2010 +0200
@@ -19,37 +19,37 @@
   val is_windows = System.getProperty("os.name").startsWith("Windows")
 
 
-  /* Isabelle platform identifiers */
+  /* Platform identifiers */
 
   private val Solaris = new Regex("SunOS|Solaris")
   private val Linux = new Regex("Linux")
   private val Darwin = new Regex("Mac OS X")
-  private val Cygwin = new Regex("Windows.*")
+  private val Windows = new Regex("Windows.*")
 
   private val X86 = new Regex("i.86|x86")
   private val X86_64 = new Regex("amd64|x86_64")
   private val Sparc = new Regex("sparc")
   private val PPC = new Regex("PowerPC|ppc")
 
-  // main default, optional 64bit variant
-  val defaults: Option[(String, Option[String])] =
+  lazy val jvm_platform: String =
   {
-    (java.lang.System.getProperty("os.name") match {
-      case Solaris() => Some("solaris")
-      case Linux() => Some("linux")
-      case Darwin() => Some("darwin")
-      case Cygwin() => Some("cygwin")
-      case _ => None
-    }) match {
-      case Some(name) =>
-        java.lang.System.getProperty("os.arch") match {
-          case X86() => Some(("x86-" + name, None))
-          case X86_64() => Some(("x86-" + name, if (is_windows) None else Some("x86_64-" + name)))
-          case Sparc() => Some(("sparc-" + name, None))
-          case PPC() => Some(("ppc-" + name, None))
-        }
-      case None => None
-    }
+    val arch =
+      java.lang.System.getProperty("os.arch") match {
+        case X86() => "x86"
+        case X86_64() => "x86_64"
+        case Sparc() => "sparc"
+        case PPC() => "ppc"
+        case _ => error("Failed to determine CPU architecture")
+      }
+    val os =
+      java.lang.System.getProperty("os.name") match {
+        case Solaris() => "solaris"
+        case Linux() => "linux"
+        case Darwin() => "darwin"
+        case Windows() => "windows"
+        case _ => error("Failed to determine operating system platform")
+      }
+    arch + "-" + os
   }
 
 
--- a/src/Pure/System/standard_system.scala	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/System/standard_system.scala	Wed Apr 21 11:23:04 2010 +0200
@@ -162,6 +162,7 @@
   /* jvm_path */
 
   private val Cygdrive = new Regex("/cygdrive/([a-zA-Z])($|/.*)")
+  private val Named_Root = new Regex("//+([^/]*)(.*)")
 
   def jvm_path(posix_path: String): String =
     if (Platform.is_windows) {
@@ -171,6 +172,11 @@
           case Cygdrive(drive, rest) =>
             result_path ++= (drive + ":" + File.separator)
             rest
+          case Named_Root(root, rest) =>
+            result_path ++= File.separator
+            result_path ++= File.separator
+            result_path ++= root
+            rest
           case path if path.startsWith("/") =>
             result_path ++= platform_root
             path
@@ -205,4 +211,17 @@
       }
     }
     else jvm_path
+
+
+  /* this_java executable */
+
+  def this_java(): String =
+  {
+    val java_home = System.getProperty("java.home")
+    val java_exe =
+      if (Platform.is_windows) new File(java_home + "\\bin\\java.exe")
+      else new File(java_home + "/bin/java")
+    if (!java_exe.isFile) error("Expected this Java executable: " + java_exe.toString)
+    posix_path(java_exe.getAbsolutePath)
+  }
 }
--- a/src/Pure/Thy/thy_output.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/Thy/thy_output.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -599,7 +599,7 @@
 val _ = ml_text "ML" (fn txt => "fn _ => (" ^ txt ^ ");");
 val _ = ml_text "ML_type" (fn txt => "val _ = NONE : (" ^ txt ^ ") option;");
 val _ = ml_text "ML_struct" (fn txt => "functor XXX() = struct structure XX = " ^ txt ^ " end;");
-val _ = ml_text "ML_functor" (K "");  (*no check!*)
+val _ = ml_text "ML_functor" (fn txt => "ML_Env.check_functor " ^ ML_Syntax.print_string txt);
 val _ = ml_text "ML_text" (K "");
 
 end;
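
The ML_functor document antiquotation is no longer a no-op but checks its
argument via ML_Env.check_functor.  A hypothetical document snippet, using the
Table functor that also serves as the sanity check above:

  text {* tables are instances of the @{ML_functor Table} functor *}
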
--- a/src/Pure/axclass.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/axclass.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -306,11 +306,11 @@
     |-> (fn const' as Const (c'', _) =>
       Thm.add_def false true
         (Binding.name (Thm.def_name c'), Logic.mk_equals (Const (c, T'), const'))
-      #>> Thm.varifyT_global
-      #-> (fn thm => add_inst_param (c, tyco) (c'', thm)
-      #> PureThy.add_thm ((Binding.conceal (Binding.name c'), thm), [])
-      #> snd
-      #> pair (Const (c, T))))
+      #>> apsnd Thm.varifyT_global
+      #-> (fn (_, thm) => add_inst_param (c, tyco) (c'', thm)
+        #> PureThy.add_thm ((Binding.conceal (Binding.name c'), thm), [])
+        #> snd
+        #> pair (Const (c, T))))
     ||> Sign.restore_naming thy
   end;
 
@@ -325,7 +325,7 @@
   in
     thy
     |> Thm.add_def false false (b', prop)
-    |>> (fn thm =>  Drule.transitive_thm OF [eq, thm])
+    |>> (fn (_, thm) =>  Drule.transitive_thm OF [eq, thm])
   end;
 
 
@@ -515,7 +515,7 @@
 
 fun add_axiom (b, prop) =
   Thm.add_axiom (b, prop) #->
-  (fn thm => PureThy.add_thm ((b, Drule.export_without_context thm), []));
+  (fn (_, thm) => PureThy.add_thm ((b, Drule.export_without_context thm), []));
 
 fun axiomatize prep mk name add raw_args thy =
   let
--- a/src/Pure/more_thm.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/more_thm.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -62,8 +62,8 @@
   val forall_intr_frees: thm -> thm
   val unvarify_global: thm -> thm
   val close_derivation: thm -> thm
-  val add_axiom: binding * term -> theory -> thm * theory
-  val add_def: bool -> bool -> binding * term -> theory -> thm * theory
+  val add_axiom: binding * term -> theory -> (string * thm) * theory
+  val add_def: bool -> bool -> binding * term -> theory -> (string * thm) * theory
   type binding = binding * attribute list
   val empty_binding: binding
   val rule_attribute: (Context.generic -> thm -> thm) -> attribute
@@ -347,31 +347,36 @@
 fun add_axiom (b, prop) thy =
   let
     val b' = if Binding.is_empty b then Binding.name ("unnamed_axiom_" ^ serial_string ()) else b;
+
     val _ = Sign.no_vars (Syntax.pp_global thy) prop;
     val (strip, recover, prop') = stripped_sorts thy prop;
     val constraints = map (fn (TFree (_, S), T) => (T, S)) strip;
     val of_sorts = maps (fn (T as TFree (_, S), _) => of_sort (Thm.ctyp_of thy T, S)) strip;
+
     val thy' =
       Theory.add_axiom (b', Logic.list_implies (maps Logic.mk_of_sort constraints, prop')) thy;
-    val axm' = Thm.axiom thy' (Sign.full_name thy' b');
+    val axm_name = Sign.full_name thy' b';
+    val axm' = Thm.axiom thy' axm_name;
     val thm =
       Thm.instantiate (recover, []) axm'
       |> unvarify_global
       |> fold elim_implies of_sorts;
-  in (thm, thy') end;
+  in ((axm_name, thm), thy') end;
 
 fun add_def unchecked overloaded (b, prop) thy =
   let
     val _ = Sign.no_vars (Syntax.pp_global thy) prop;
     val prems = map (Thm.cterm_of thy) (Logic.strip_imp_prems prop);
     val (_, recover, concl') = stripped_sorts thy (Logic.strip_imp_concl prop);
+
     val thy' = Theory.add_def unchecked overloaded (b, concl') thy;
-    val axm' = Thm.axiom thy' (Sign.full_name thy' b);
+    val axm_name = Sign.full_name thy' b;
+    val axm' = Thm.axiom thy' axm_name;
     val thm =
       Thm.instantiate (recover, []) axm'
       |> unvarify_global
       |> fold_rev Thm.implies_intr prems;
-  in (thm, thy') end;
+  in ((axm_name, thm), thy') end;
 
 
 
--- a/src/Pure/pure_thy.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/pure_thy.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -210,7 +210,7 @@
 fun add prep unchecked overloaded = fold_map (fn ((b, raw_prop), atts) => fn thy =>
   let
     val prop = prep thy (b, raw_prop);
-    val (def, thy') = Thm.add_def unchecked overloaded (b, prop) thy;
+    val ((_, def), thy') = Thm.add_def unchecked overloaded (b, prop) thy;
     val thm = def
       |> Thm.forall_intr_frees
       |> Thm.forall_elim_vars 0
--- a/src/Pure/sign.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/sign.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -72,8 +72,7 @@
   val add_defsort_i: sort -> theory -> theory
   val add_types: (binding * int * mixfix) list -> theory -> theory
   val add_nonterminals: binding list -> theory -> theory
-  val add_tyabbrs: (binding * string list * string * mixfix) list -> theory -> theory
-  val add_tyabbrs_i: (binding * string list * typ * mixfix) list -> theory -> theory
+  val add_type_abbrev: binding * string list * typ -> theory -> theory
   val add_syntax: (string * string * mixfix) list -> theory -> theory
   val add_syntax_i: (string * typ * mixfix) list -> theory -> theory
   val add_modesyntax: Syntax.mode -> (string * string * mixfix) list -> theory -> theory
@@ -346,42 +345,25 @@
 
 (* add type constructors *)
 
-val type_syntax = Syntax.mark_type oo full_name;
-
 fun add_types types thy = thy |> map_sign (fn (naming, syn, tsig, consts) =>
   let
-    val syn' =
-      Syntax.update_type_gram true Syntax.mode_default
-        (map (fn (a, n, mx) => (type_syntax thy a, Syntax.make_type n, mx)) types) syn;
-    val decls = map (fn (a, n, _) => (a, n)) types;
-    val tsig' = fold (Type.add_type naming) decls tsig;
+    fun type_syntax (b, n, mx) =
+      (Syntax.mark_type (Name_Space.full_name naming b), Syntax.make_type n, mx);
+    val syn' = Syntax.update_type_gram true Syntax.mode_default (map type_syntax types) syn;
+    val tsig' = fold (fn (a, n, _) => Type.add_type naming (a, n)) types tsig;
   in (naming, syn', tsig', consts) end);
 
 
 (* add nonterminals *)
 
-fun add_nonterminals ns thy = thy |> map_sign (fn (naming, syn, tsig, consts) =>
-  let
-    val tsig' = fold (Type.add_nonterminal naming) ns tsig;
-  in (naming, syn, tsig', consts) end);
+fun add_nonterminals ns = map_sign (fn (naming, syn, tsig, consts) =>
+  (naming, syn, fold (Type.add_nonterminal naming) ns tsig, consts));
 
 
 (* add type abbreviations *)
 
-fun gen_add_tyabbr parse_typ (b, vs, rhs, mx) thy =
-  thy |> map_sign (fn (naming, syn, tsig, consts) =>
-    let
-      val ctxt = ProofContext.init thy;
-      val syn' =
-        Syntax.update_type_gram true Syntax.mode_default
-          [(type_syntax thy b, Syntax.make_type (length vs), mx)] syn;
-      val abbr = (b, vs, certify_typ_mode Type.mode_syntax thy (parse_typ ctxt rhs))
-        handle ERROR msg => cat_error msg ("in type abbreviation " ^ quote (Binding.str_of b));
-      val tsig' = Type.add_abbrev naming abbr tsig;
-    in (naming, syn', tsig', consts) end);
-
-val add_tyabbrs = fold (gen_add_tyabbr Syntax.parse_typ);
-val add_tyabbrs_i = fold (gen_add_tyabbr (K I));
+fun add_type_abbrev abbr = map_sign (fn (naming, syn, tsig, consts) =>
+  (naming, syn, Type.add_abbrev naming abbr tsig, consts));
 
 
 (* modify syntax *)
--- a/src/Pure/sorts.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Pure/sorts.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -57,8 +57,8 @@
   val meet_sort_typ: algebra -> typ * sort -> typ -> typ   (*exception CLASS_ERROR*)
   val of_sort: algebra -> typ * sort -> bool
   val of_sort_derivation: algebra ->
-    {class_relation: 'a * class -> class -> 'a,
-     type_constructor: string -> ('a * class) list list -> class -> 'a,
+    {class_relation: typ -> 'a * class -> class -> 'a,
+     type_constructor: string * typ list -> ('a * class) list list -> class -> 'a,
      type_variable: typ -> ('a * class) list} ->
     typ * sort -> 'a list   (*exception CLASS_ERROR*)
   val classrel_derivation: algebra ->
@@ -335,15 +335,15 @@
 (* errors -- performance tuning via delayed message composition *)
 
 datatype class_error =
-  NoClassrel of class * class |
-  NoArity of string * class |
-  NoSubsort of sort * sort;
+  No_Classrel of class * class |
+  No_Arity of string * class |
+  No_Subsort of sort * sort;
 
-fun class_error pp (NoClassrel (c1, c2)) =
+fun class_error pp (No_Classrel (c1, c2)) =
       "No class relation " ^ Pretty.string_of_classrel pp [c1, c2]
-  | class_error pp (NoArity (a, c)) =
+  | class_error pp (No_Arity (a, c)) =
       "No type arity " ^ Pretty.string_of_arity pp (a, [], [c])
-  | class_error pp (NoSubsort (S1, S2)) =
+  | class_error pp (No_Subsort (S1, S2)) =
      "Cannot derive subsort relation " ^ Pretty.string_of_sort pp S1
        ^ " < " ^ Pretty.string_of_sort pp S2;
 
@@ -357,7 +357,7 @@
     val arities = arities_of algebra;
     fun dom c =
       (case AList.lookup (op =) (Symtab.lookup_list arities a) c of
-        NONE => raise CLASS_ERROR (NoArity (a, c))
+        NONE => raise CLASS_ERROR (No_Arity (a, c))
       | SOME (_, Ss) => Ss);
     fun dom_inter c Ss = ListPair.map (inter_sort algebra) (dom c, Ss);
   in
@@ -375,7 +375,7 @@
     fun meet _ [] = I
       | meet (TFree (_, S)) S' =
           if sort_le algebra (S, S') then I
-          else raise CLASS_ERROR (NoSubsort (S, S'))
+          else raise CLASS_ERROR (No_Subsort (S, S'))
       | meet (TVar (v, S)) S' =
           if sort_le algebra (S, S') then I
           else Vartab.map_default (v, S) (inters S')
@@ -410,25 +410,30 @@
   let
     val arities = arities_of algebra;
 
-    fun weaken S1 S2 = S2 |> map (fn c2 =>
-      (case S1 |> find_first (fn (_, c1) => class_le algebra (c1, c2)) of
-        SOME d1 => class_relation d1 c2
-      | NONE => raise CLASS_ERROR (NoSubsort (map #2 S1, S2))));
+    fun weaken T D1 S2 =
+      let val S1 = map snd D1 in
+        if S1 = S2 then map fst D1
+        else
+          S2 |> map (fn c2 =>
+            (case D1 |> find_first (fn (_, c1) => class_le algebra (c1, c2)) of
+              SOME d1 => class_relation T d1 c2
+            | NONE => raise CLASS_ERROR (No_Subsort (S1, S2))))
+      end;
 
-    fun derive _ [] = []
-      | derive (Type (a, Ts)) S =
+    fun derive (_, []) = []
+      | derive (T as Type (a, Us), S) =
           let
             val Ss = mg_domain algebra a S;
-            val dom = map2 (fn T => fn S => derive T S ~~ S) Ts Ss;
+            val dom = map2 (fn U => fn S => derive (U, S) ~~ S) Us Ss;
           in
             S |> map (fn c =>
               let
                 val (c0, Ss') = the (AList.lookup (op =) (Symtab.lookup_list arities a) c);
-                val dom' = map2 (fn d => fn S' => weaken d S' ~~ S') dom Ss';
-              in class_relation (type_constructor a dom' c0, c0) c end)
+                val dom' = map (fn ((U, d), S') => weaken U d S' ~~ S') ((Us ~~ dom) ~~ Ss');
+              in class_relation T (type_constructor (a, Us) dom' c0, c0) c end)
           end
-      | derive T S = weaken (type_variable T) S;
-  in uncurry derive end;
+      | derive (T, S) = weaken T (type_variable T) S;
+  in derive end;
 
 fun classrel_derivation algebra class_relation =
   let
@@ -437,7 +442,7 @@
   in
     fn (x, c1) => fn c2 =>
       (case Graph.irreducible_paths (classes_of algebra) (c1, c2) of
-        [] => raise CLASS_ERROR (NoClassrel (c1, c2))
+        [] => raise CLASS_ERROR (No_Classrel (c1, c2))
       | cs :: _ => path (x, cs))
   end;
 
--- a/src/Tools/Code/code_preproc.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Tools/Code/code_preproc.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -365,13 +365,13 @@
 fun dicts_of thy (proj_sort, algebra) (T, sort) =
   let
     fun class_relation (x, _) _ = x;
-    fun type_constructor tyco xs class =
+    fun type_constructor (tyco, _) xs class =
       inst_params thy tyco (Sorts.complete_sort algebra [class])
         @ (maps o maps) fst xs;
     fun type_variable (TFree (_, sort)) = map (pair []) (proj_sort sort);
   in
     flat (Sorts.of_sort_derivation algebra
-      { class_relation = class_relation, type_constructor = type_constructor,
+      { class_relation = K class_relation, type_constructor = type_constructor,
         type_variable = type_variable } (T, proj_sort sort)
        handle Sorts.CLASS_ERROR _ => [] (*permissive!*))
   end;
--- a/src/Tools/Code/code_target.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Tools/Code/code_target.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -165,8 +165,6 @@
 
 val abort_allowed = snd o fst o Targets.get;
 
-val the_default_width = snd o Targets.get;
-
 fun assert_target thy target = if Symtab.defined ((fst o fst) (Targets.get thy)) target
   then target
   else error ("Unknown code target language: " ^ quote target);
--- a/src/Tools/Code/code_thingol.ML	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Tools/Code/code_thingol.ML	Wed Apr 21 11:23:04 2010 +0200
@@ -655,10 +655,10 @@
             translate_app thy algbr eqngr some_abs some_thm ((c, ty), ts)
         | (t', ts) =>
             translate_term thy algbr eqngr some_abs some_thm t'
-            ##>> fold_map (translate_term thy algbr eqngr some_abs some_thm) ts
+            ##>> fold_map (translate_term thy algbr eqngr NONE some_thm) ts
             #>> (fn (t, ts) => t `$$ ts)
-and translate_eqn thy algbr eqngr ((some_abs, (args, rhs)), (some_thm, proper)) =
-  fold_map (translate_term thy algbr eqngr some_abs some_thm) args
+and translate_eqn thy algbr eqngr ((args, (rhs, some_abs)), (some_thm, proper)) =
+  fold_map (fn (arg, some_abs) => translate_term thy algbr eqngr some_abs some_thm arg) args
   ##>> translate_term thy algbr eqngr some_abs some_thm rhs
   #>> rpair (some_thm, proper)
 and translate_const thy algbr eqngr some_abs some_thm (c, ty) =
@@ -680,7 +680,7 @@
   end
 and translate_app_const thy algbr eqngr some_abs some_thm (c_ty, ts) =
   translate_const thy algbr eqngr some_abs some_thm c_ty
-  ##>> fold_map (translate_term thy algbr eqngr some_abs some_thm) ts
+  ##>> fold_map (translate_term thy algbr eqngr NONE some_thm) ts
   #>> (fn (t, ts) => t `$$ ts)
 and translate_case thy algbr eqngr some_abs some_thm (num_args, (t_pos, case_pats)) (c_ty, ts) =
   let
@@ -753,7 +753,7 @@
     translate_case thy algbr eqngr some_abs some_thm case_scheme ((c, ty), ts)
 and translate_app thy algbr eqngr some_abs some_thm (c_ty_ts as ((c, _), _)) =
   case Code.get_case_scheme thy c
-   of SOME case_scheme => translate_app_case thy algbr eqngr some_abs some_thm case_scheme c_ty_ts
+   of SOME case_scheme => translate_app_case thy algbr eqngr NONE some_thm case_scheme c_ty_ts
     | NONE => translate_app_const thy algbr eqngr some_abs some_thm c_ty_ts
 and translate_tyvar_sort thy (algbr as (proj_sort, _)) eqngr (v, sort) =
   fold_map (ensure_class thy algbr eqngr) (proj_sort sort)
@@ -767,14 +767,14 @@
           Global ((class, tyco), yss)
       | class_relation (Local (classrels, v), subclass) superclass =
           Local ((subclass, superclass) :: classrels, v);
-    fun type_constructor tyco yss class =
+    fun type_constructor (tyco, _) yss class =
       Global ((class, tyco), (map o map) fst yss);
     fun type_variable (TFree (v, sort)) =
       let
         val sort' = proj_sort sort;
       in map_index (fn (n, class) => (Local ([], (v, (n, sort'))), class)) sort' end;
     val typargs = Sorts.of_sort_derivation algebra
-      {class_relation = Sorts.classrel_derivation algebra class_relation,
+      {class_relation = K (Sorts.classrel_derivation algebra class_relation),
        type_constructor = type_constructor,
        type_variable = type_variable} (ty, proj_sort sort)
       handle Sorts.CLASS_ERROR e => not_wellsorted thy some_thm ty sort e;
--- a/src/Tools/jEdit/README_BUILD	Wed Apr 21 10:44:44 2010 +0200
+++ b/src/Tools/jEdit/README_BUILD	Wed Apr 21 11:23:04 2010 +0200
@@ -15,7 +15,6 @@
 
 * jEdit 4.3.1 (final)
   http://www.jedit.org/
-
   Netbeans Project "jEdit": install official sources as ./contrib/jEdit/.
 
 * jEdit plugins: