merged
authorwenzelm
Fri, 18 Mar 2016 20:29:50 +0100
changeset 62670 006c057814f1
parent 62653 d3a5b127eb81 (current diff)
parent 62669 c95b76681e65 (diff)
child 62671 a9ee1f240b81
merged
NEWS
src/Pure/General/secure.ML
src/Pure/General/sha1_polyml.ML
src/Pure/General/sha1_samples.ML
src/Pure/ML/install_pp_polyml.ML
--- a/NEWS	Fri Mar 18 12:54:20 2016 +0100
+++ b/NEWS	Fri Mar 18 20:29:50 2016 +0100
@@ -220,6 +220,9 @@
 
 *** ML ***
 
+* Antiquotation @{make_string} is available during Pure bootstrap --
+with approximative output quality.
+
 * Option ML_exception_debugger controls detailed exception trace via the
 Poly/ML debugger. Relevant ML modules need to be compiled beforehand
 with ML_file_debug, or with ML_file and option ML_debugger enabled. Note
--- a/src/Doc/Isar_Ref/Generic.thy	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Doc/Isar_Ref/Generic.thy	Fri Mar 18 20:29:50 2016 +0100
@@ -247,24 +247,23 @@
 
 section \<open>The Simplifier \label{sec:simplifier}\<close>
 
-text \<open>The Simplifier performs conditional and unconditional
-  rewriting and uses contextual information: rule declarations in the
-  background theory or local proof context are taken into account, as
-  well as chained facts and subgoal premises (``local assumptions'').
-  There are several general hooks that allow to modify the
-  simplification strategy, or incorporate other proof tools that solve
-  sub-problems, produce rewrite rules on demand etc.
+text \<open>
+  The Simplifier performs conditional and unconditional rewriting and uses
+  contextual information: rule declarations in the background theory or local
+  proof context are taken into account, as well as chained facts and subgoal
+  premises (``local assumptions''). There are several general hooks that allow
+  one to modify the simplification strategy, or to incorporate other proof
+  tools that solve sub-problems, produce rewrite rules on demand, etc.
 
-  The rewriting strategy is always strictly bottom up, except for
-  congruence rules, which are applied while descending into a term.
-  Conditions in conditional rewrite rules are solved recursively
-  before the rewrite rule is applied.
+  The rewriting strategy is always strictly bottom up, except for congruence
+  rules, which are applied while descending into a term. Conditions in
+  conditional rewrite rules are solved recursively before the rewrite rule is
+  applied.
 
-  The default Simplifier setup of major object logics (HOL, HOLCF,
-  FOL, ZF) makes the Simplifier ready for immediate use, without
-  engaging into the internal structures.  Thus it serves as
-  general-purpose proof tool with the main focus on equational
-  reasoning, and a bit more than that.
+  The default Simplifier setup of major object logics (HOL, HOLCF, FOL, ZF)
+  makes the Simplifier ready for immediate use, without engaging with its
+  internal structures. Thus it serves as a general-purpose proof tool with the
+  main focus on equational reasoning, and a bit more than that.
 \<close>
 
 
@@ -288,69 +287,63 @@
       'cong' (() | 'add' | 'del')) ':' @{syntax thmrefs}
   \<close>}
 
-  \<^descr> @{method simp} invokes the Simplifier on the first subgoal,
-  after inserting chained facts as additional goal premises; further
-  rule declarations may be included via \<open>(simp add: facts)\<close>.
-  The proof method fails if the subgoal remains unchanged after
-  simplification.
+  \<^descr> @{method simp} invokes the Simplifier on the first subgoal, after
+  inserting chained facts as additional goal premises; further rule
+  declarations may be included via \<open>(simp add: facts)\<close>. The proof method fails
+  if the subgoal remains unchanged after simplification.
 
-  Note that the original goal premises and chained facts are subject
-  to simplification themselves, while declarations via \<open>add\<close>/\<open>del\<close> merely follow the policies of the object-logic
-  to extract rewrite rules from theorems, without further
-  simplification.  This may lead to slightly different behavior in
-  either case, which might be required precisely like that in some
-  boundary situations to perform the intended simplification step!
+  Note that the original goal premises and chained facts are subject to
+  simplification themselves, while declarations via \<open>add\<close>/\<open>del\<close> merely follow
+  the policies of the object-logic to extract rewrite rules from theorems,
+  without further simplification. This may lead to slightly different behavior
+  in either case; in some boundary situations this difference is precisely what
+  is needed to perform the intended simplification step!
 
   \<^medskip>
-  The \<open>only\<close> modifier first removes all other rewrite
-  rules, looper tactics (including split rules), congruence rules, and
-  then behaves like \<open>add\<close>.  Implicit solvers remain, which means
-  that trivial rules like reflexivity or introduction of \<open>True\<close> are available to solve the simplified subgoals, but also
-  non-trivial tools like linear arithmetic in HOL.  The latter may
-  lead to some surprise of the meaning of ``only'' in Isabelle/HOL
-  compared to English!
+  The \<open>only\<close> modifier first removes all other rewrite rules, looper tactics
+  (including split rules), and congruence rules, and then behaves like \<open>add\<close>.
+  Implicit solvers remain, which means that trivial rules like reflexivity or
+  introduction of \<open>True\<close> are available to solve the simplified subgoals, but
+  also non-trivial tools like linear arithmetic in HOL. The latter may lead to
+  some surprise about the meaning of ``only'' in Isabelle/HOL compared to
+  English!
 
   \<^medskip>
-  The \<open>split\<close> modifiers add or delete rules for the
-  Splitter (see also \secref{sec:simp-strategies} on the looper).
-  This works only if the Simplifier method has been properly setup to
-  include the Splitter (all major object logics such HOL, HOLCF, FOL,
-  ZF do this already).
+  The \<open>split\<close> modifiers add or delete rules for the Splitter (see also
+  \secref{sec:simp-strategies} on the looper). This works only if the
+  Simplifier method has been properly set up to include the Splitter (all major
+  object logics such as HOL, HOLCF, FOL, ZF do this already).
 
   There is also a separate @{method_ref split} method available for
-  single-step case splitting.  The effect of repeatedly applying
-  \<open>(split thms)\<close> can be imitated by ``\<open>(simp only:
-  split: thms)\<close>''.
+  single-step case splitting. The effect of repeatedly applying \<open>(split thms)\<close>
+  can be imitated by ``\<open>(simp only: split: thms)\<close>''.
 
   \<^medskip>
-  The \<open>cong\<close> modifiers add or delete Simplifier
-  congruence rules (see also \secref{sec:simp-rules}); the default is
-  to add.
+  The \<open>cong\<close> modifiers add or delete Simplifier congruence rules (see also
+  \secref{sec:simp-rules}); the default is to add.
 
-  \<^descr> @{method simp_all} is similar to @{method simp}, but acts on
-  all goals, working backwards from the last to the first one as usual
-  in Isabelle.\<^footnote>\<open>The order is irrelevant for goals without
-  schematic variables, so simplification might actually be performed
-  in parallel here.\<close>
+  \<^descr> @{method simp_all} is similar to @{method simp}, but acts on all goals,
+  working backwards from the last to the first one as usual in Isabelle.\<^footnote>\<open>The
+  order is irrelevant for goals without schematic variables, so simplification
+  might actually be performed in parallel here.\<close>
 
-  Chained facts are inserted into all subgoals, before the
-  simplification process starts.  Further rule declarations are the
-  same as for @{method simp}.
+  Chained facts are inserted into all subgoals, before the simplification
+  process starts. Further rule declarations are the same as for @{method
+  simp}.
 
   The proof method fails if all subgoals remain unchanged after
   simplification.
 
-  \<^descr> @{attribute simp_depth_limit} limits the number of recursive
-  invocations of the Simplifier during conditional rewriting.
+  \<^descr> @{attribute simp_depth_limit} limits the number of recursive invocations
+  of the Simplifier during conditional rewriting.
 
 
-  By default the Simplifier methods above take local assumptions fully
-  into account, using equational assumptions in the subsequent
-  normalization process, or simplifying assumptions themselves.
-  Further options allow to fine-tune the behavior of the Simplifier
-  in this respect, corresponding to a variety of ML tactics as
-  follows.\<^footnote>\<open>Unlike the corresponding Isar proof methods, the
-  ML tactics do not insist in changing the goal state.\<close>
+  By default the Simplifier methods above take local assumptions fully into
+  account, using equational assumptions in the subsequent normalization
+  process, or simplifying assumptions themselves. Further options allow one to
+  fine-tune the behavior of the Simplifier in this respect, corresponding to a
+  variety of ML tactics as follows.\<^footnote>\<open>Unlike the corresponding Isar proof
+  methods, the ML tactics do not insist on changing the goal state.\<close>
 
   \begin{center}
   \small
@@ -358,24 +351,21 @@
   \hline
   Isar method & ML tactic & behavior \\\hline
 
-  \<open>(simp (no_asm))\<close> & @{ML simp_tac} & assumptions are ignored
-  completely \\\hline
+  \<open>(simp (no_asm))\<close> & @{ML simp_tac} & assumptions are ignored completely
+  \\\hline
 
-  \<open>(simp (no_asm_simp))\<close> & @{ML asm_simp_tac} & assumptions
-  are used in the simplification of the conclusion but are not
-  themselves simplified \\\hline
+  \<open>(simp (no_asm_simp))\<close> & @{ML asm_simp_tac} & assumptions are used in the
+  simplification of the conclusion but are not themselves simplified \\\hline
 
-  \<open>(simp (no_asm_use))\<close> & @{ML full_simp_tac} & assumptions
-  are simplified but are not used in the simplification of each other
-  or the conclusion \\\hline
+  \<open>(simp (no_asm_use))\<close> & @{ML full_simp_tac} & assumptions are simplified but
+  are not used in the simplification of each other or the conclusion \\\hline
 
-  \<open>(simp)\<close> & @{ML asm_full_simp_tac} & assumptions are used in
-  the simplification of the conclusion and to simplify other
-  assumptions \\\hline
+  \<open>(simp)\<close> & @{ML asm_full_simp_tac} & assumptions are used in the
+  simplification of the conclusion and to simplify other assumptions \\\hline
 
-  \<open>(simp (asm_lr))\<close> & @{ML asm_lr_simp_tac} & compatibility
-  mode: an assumption is only used for simplifying assumptions which
-  are to the right of it \\\hline
+  \<open>(simp (asm_lr))\<close> & @{ML asm_lr_simp_tac} & compatibility mode: an
+  assumption is only used for simplifying assumptions which are to the right
+  of it \\\hline
 
   \end{tabular}
   \end{center}
@@ -384,18 +374,19 @@
 
 subsubsection \<open>Examples\<close>
 
-text \<open>We consider basic algebraic simplifications in Isabelle/HOL.
-  The rather trivial goal @{prop "0 + (x + 0) = x + 0 + 0"} looks like
-  a good candidate to be solved by a single call of @{method simp}:
+text \<open>
+  We consider basic algebraic simplifications in Isabelle/HOL. The rather
+  trivial goal @{prop "0 + (x + 0) = x + 0 + 0"} looks like a good candidate
+  to be solved by a single call of @{method simp}:
 \<close>
 
 lemma "0 + (x + 0) = x + 0 + 0" apply simp? oops
 
-text \<open>The above attempt \<^emph>\<open>fails\<close>, because @{term "0"} and @{term
-  "op +"} in the HOL library are declared as generic type class
-  operations, without stating any algebraic laws yet.  More specific
-  types are required to get access to certain standard simplifications
-  of the theory context, e.g.\ like this:\<close>
+text \<open>
+  The above attempt \<^emph>\<open>fails\<close>, because @{term "0"} and @{term "op +"} in the
+  HOL library are declared as generic type class operations, without stating
+  any algebraic laws yet. More specific types are required to get access to
+  certain standard simplifications of the theory context, e.g.\ like this:\<close>
 
 lemma fixes x :: nat shows "0 + (x + 0) = x + 0 + 0" by simp
 lemma fixes x :: int shows "0 + (x + 0) = x + 0 + 0" by simp
@@ -403,39 +394,38 @@
 
 text \<open>
   \<^medskip>
-  In many cases, assumptions of a subgoal are also needed in
-  the simplification process.  For example:
+  In many cases, assumptions of a subgoal are also needed in the
+  simplification process. For example:
 \<close>
 
 lemma fixes x :: nat shows "x = 0 \<Longrightarrow> x + x = 0" by simp
 lemma fixes x :: nat assumes "x = 0" shows "x + x = 0" apply simp oops
 lemma fixes x :: nat assumes "x = 0" shows "x + x = 0" using assms by simp
 
-text \<open>As seen above, local assumptions that shall contribute to
-  simplification need to be part of the subgoal already, or indicated
-  explicitly for use by the subsequent method invocation.  Both too
-  little or too much information can make simplification fail, for
-  different reasons.
+text \<open>
+  As seen above, local assumptions that shall contribute to simplification
+  need to be part of the subgoal already, or indicated explicitly for use by
+  the subsequent method invocation. Both too little and too much information
+  can make simplification fail, for different reasons.
 
-  In the next example the malicious assumption @{prop "\<And>x::nat. f x =
-  g (f (g x))"} does not contribute to solve the problem, but makes
-  the default @{method simp} method loop: the rewrite rule \<open>f
-  ?x \<equiv> g (f (g ?x))\<close> extracted from the assumption does not
-  terminate.  The Simplifier notices certain simple forms of
-  nontermination, but not this one.  The problem can be solved
-  nonetheless, by ignoring assumptions via special options as
-  explained before:
+  In the next example the malicious assumption @{prop "\<And>x::nat. f x = g (f (g
+  x))"} does not contribute to solve the problem, but makes the default
+  @{method simp} method loop: the rewrite rule \<open>f ?x \<equiv> g (f (g ?x))\<close> extracted
+  from the assumption does not terminate. The Simplifier notices certain
+  simple forms of nontermination, but not this one. The problem can be solved
+  nonetheless, by ignoring assumptions via special options as explained
+  before:
 \<close>
 
 lemma "(\<And>x::nat. f x = g (f (g x))) \<Longrightarrow> f 0 = f 0 + 0"
   by (simp (no_asm))
 
-text \<open>The latter form is typical for long unstructured proof
-  scripts, where the control over the goal content is limited.  In
-  structured proofs it is usually better to avoid pushing too many
-  facts into the goal state in the first place.  Assumptions in the
-  Isar proof context do not intrude the reasoning if not used
-  explicitly.  This is illustrated for a toplevel statement and a
+text \<open>
+  The latter form is typical for long unstructured proof scripts, where the
+  control over the goal content is limited. In structured proofs it is usually
+  better to avoid pushing too many facts into the goal state in the first
+  place. Assumptions in the Isar proof context do not intrude on the reasoning
+  unless used explicitly. This is illustrated for a toplevel statement and a
   local proof body as follows:
 \<close>
 
@@ -451,26 +441,26 @@
 
 text \<open>
   \<^medskip>
-  Because assumptions may simplify each other, there
-  can be very subtle cases of nontermination. For example, the regular
-  @{method simp} method applied to @{prop "P (f x) \<Longrightarrow> y = x \<Longrightarrow> f x = f y
-  \<Longrightarrow> Q"} gives rise to the infinite reduction sequence
+  Because assumptions may simplify each other, there can be very subtle cases
+  of nontermination. For example, the regular @{method simp} method applied to
+  @{prop "P (f x) \<Longrightarrow> y = x \<Longrightarrow> f x = f y \<Longrightarrow> Q"} gives rise to the infinite
+  reduction sequence
   \[
   \<open>P (f x)\<close> \stackrel{\<open>f x \<equiv> f y\<close>}{\longmapsto}
   \<open>P (f y)\<close> \stackrel{\<open>y \<equiv> x\<close>}{\longmapsto}
   \<open>P (f x)\<close> \stackrel{\<open>f x \<equiv> f y\<close>}{\longmapsto} \cdots
   \]
-  whereas applying the same to @{prop "y = x \<Longrightarrow> f x = f y \<Longrightarrow> P (f x) \<Longrightarrow>
-  Q"} terminates (without solving the goal):
+  whereas applying the same to @{prop "y = x \<Longrightarrow> f x = f y \<Longrightarrow> P (f x) \<Longrightarrow> Q"}
+  terminates (without solving the goal):
 \<close>
 
 lemma "y = x \<Longrightarrow> f x = f y \<Longrightarrow> P (f x) \<Longrightarrow> Q"
   apply simp
   oops
 
-text \<open>See also \secref{sec:simp-trace} for options to enable
-  Simplifier trace mode, which often helps to diagnose problems with
-  rewrite systems.
+text \<open>
+  See also \secref{sec:simp-trace} for options to enable Simplifier trace
+  mode, which often helps to diagnose problems with rewrite systems.
 \<close>
 
 
@@ -491,200 +481,180 @@
     @@{command print_simpset} ('!'?)
   \<close>}
 
-  \<^descr> @{attribute simp} declares rewrite rules, by adding or
-  deleting them from the simpset within the theory or proof context.
-  Rewrite rules are theorems expressing some form of equality, for
-  example:
+  \<^descr> @{attribute simp} declares rewrite rules, by adding or deleting them from
+  the simpset within the theory or proof context. Rewrite rules are theorems
+  expressing some form of equality, for example:
 
   \<open>Suc ?m + ?n = ?m + Suc ?n\<close> \\
   \<open>?P \<and> ?P \<longleftrightarrow> ?P\<close> \\
   \<open>?A \<union> ?B \<equiv> {x. x \<in> ?A \<or> x \<in> ?B}\<close>
 
   \<^medskip>
-  Conditional rewrites such as \<open>?m < ?n \<Longrightarrow> ?m div ?n = 0\<close> are
-  also permitted; the conditions can be arbitrary formulas.
+  Conditional rewrites such as \<open>?m < ?n \<Longrightarrow> ?m div ?n = 0\<close> are also permitted;
+  the conditions can be arbitrary formulas.
 
   \<^medskip>
-  Internally, all rewrite rules are translated into Pure
-  equalities, theorems with conclusion \<open>lhs \<equiv> rhs\<close>. The
-  simpset contains a function for extracting equalities from arbitrary
-  theorems, which is usually installed when the object-logic is
-  configured initially. For example, \<open>\<not> ?x \<in> {}\<close> could be
-  turned into \<open>?x \<in> {} \<equiv> False\<close>. Theorems that are declared as
-  @{attribute simp} and local assumptions within a goal are treated
-  uniformly in this respect.
+  Internally, all rewrite rules are translated into Pure equalities, theorems
+  with conclusion \<open>lhs \<equiv> rhs\<close>. The simpset contains a function for extracting
+  equalities from arbitrary theorems, which is usually installed when the
+  object-logic is configured initially. For example, \<open>\<not> ?x \<in> {}\<close> could be
+  turned into \<open>?x \<in> {} \<equiv> False\<close>. Theorems that are declared as @{attribute
+  simp} and local assumptions within a goal are treated uniformly in this
+  respect.
 
-  The Simplifier accepts the following formats for the \<open>lhs\<close>
-  term:
+  The Simplifier accepts the following formats for the \<open>lhs\<close> term:
 
-    \<^enum> First-order patterns, considering the sublanguage of
-    application of constant operators to variable operands, without
-    \<open>\<lambda>\<close>-abstractions or functional variables.
-    For example:
+    \<^enum> First-order patterns, considering the sublanguage of application of
+    constant operators to variable operands, without \<open>\<lambda>\<close>-abstractions or
+    functional variables. For example:
 
     \<open>(?x + ?y) + ?z \<equiv> ?x + (?y + ?z)\<close> \\
     \<open>f (f ?x ?y) ?z \<equiv> f ?x (f ?y ?z)\<close>
 
-    \<^enum> Higher-order patterns in the sense of @{cite "nipkow-patterns"}.
-    These are terms in \<open>\<beta>\<close>-normal form (this will always be the
-    case unless you have done something strange) where each occurrence
-    of an unknown is of the form \<open>?F x\<^sub>1 \<dots> x\<^sub>n\<close>, where the
-    \<open>x\<^sub>i\<close> are distinct bound variables.
+    \<^enum> Higher-order patterns in the sense of @{cite "nipkow-patterns"}. These
+    are terms in \<open>\<beta>\<close>-normal form (this will always be the case unless you have
+    done something strange) where each occurrence of an unknown is of the form
+    \<open>?F x\<^sub>1 \<dots> x\<^sub>n\<close>, where the \<open>x\<^sub>i\<close> are distinct bound variables.
 
-    For example, \<open>(\<forall>x. ?P x \<and> ?Q x) \<equiv> (\<forall>x. ?P x) \<and> (\<forall>x. ?Q x)\<close>
-    or its symmetric form, since the \<open>rhs\<close> is also a
-    higher-order pattern.
+    For example, \<open>(\<forall>x. ?P x \<and> ?Q x) \<equiv> (\<forall>x. ?P x) \<and> (\<forall>x. ?Q x)\<close> or its
+    symmetric form, since the \<open>rhs\<close> is also a higher-order pattern.
 
-    \<^enum> Physical first-order patterns over raw \<open>\<lambda>\<close>-term
-    structure without \<open>\<alpha>\<beta>\<eta>\<close>-equality; abstractions and bound
-    variables are treated like quasi-constant term material.
+    \<^enum> Physical first-order patterns over raw \<open>\<lambda>\<close>-term structure without
+    \<open>\<alpha>\<beta>\<eta>\<close>-equality; abstractions and bound variables are treated like
+    quasi-constant term material.
 
-    For example, the rule \<open>?f ?x \<in> range ?f = True\<close> rewrites the
-    term \<open>g a \<in> range g\<close> to \<open>True\<close>, but will fail to
-    match \<open>g (h b) \<in> range (\<lambda>x. g (h x))\<close>. However, offending
-    subterms (in our case \<open>?f ?x\<close>, which is not a pattern) can
-    be replaced by adding new variables and conditions like this: \<open>?y = ?f ?x \<Longrightarrow> ?y \<in> range ?f = True\<close> is acceptable as a conditional
-    rewrite rule of the second category since conditions can be
-    arbitrary terms.
+    For example, the rule \<open>?f ?x \<in> range ?f = True\<close> rewrites the term \<open>g a \<in>
+    range g\<close> to \<open>True\<close>, but will fail to match \<open>g (h b) \<in> range (\<lambda>x. g (h
+    x))\<close>. However, offending subterms (in our case \<open>?f ?x\<close>, which is not a
+    pattern) can be replaced by adding new variables and conditions like this:
+    \<open>?y = ?f ?x \<Longrightarrow> ?y \<in> range ?f = True\<close> is acceptable as a conditional rewrite
+    rule of the second category since conditions can be arbitrary terms.
 
   \<^descr> @{attribute split} declares case split rules.
 
-  \<^descr> @{attribute cong} declares congruence rules to the Simplifier
-  context.
+  \<^descr> @{attribute cong} declares congruence rules to the Simplifier context.
 
   Congruence rules are equalities of the form @{text [display]
   "\<dots> \<Longrightarrow> f ?x\<^sub>1 \<dots> ?x\<^sub>n = f ?y\<^sub>1 \<dots> ?y\<^sub>n"}
 
-  This controls the simplification of the arguments of \<open>f\<close>.  For
-  example, some arguments can be simplified under additional
-  assumptions: @{text [display] "?P\<^sub>1 \<longleftrightarrow> ?Q\<^sub>1 \<Longrightarrow> (?Q\<^sub>1 \<Longrightarrow> ?P\<^sub>2 \<longleftrightarrow> ?Q\<^sub>2) \<Longrightarrow>
-  (?P\<^sub>1 \<longrightarrow> ?P\<^sub>2) \<longleftrightarrow> (?Q\<^sub>1 \<longrightarrow> ?Q\<^sub>2)"}
+  This controls the simplification of the arguments of \<open>f\<close>. For example, some
+  arguments can be simplified under additional assumptions:
+  @{text [display]
+    "?P\<^sub>1 \<longleftrightarrow> ?Q\<^sub>1 \<Longrightarrow>
+    (?Q\<^sub>1 \<Longrightarrow> ?P\<^sub>2 \<longleftrightarrow> ?Q\<^sub>2) \<Longrightarrow>
+    (?P\<^sub>1 \<longrightarrow> ?P\<^sub>2) \<longleftrightarrow> (?Q\<^sub>1 \<longrightarrow> ?Q\<^sub>2)"}
 
-  Given this rule, the Simplifier assumes \<open>?Q\<^sub>1\<close> and extracts
-  rewrite rules from it when simplifying \<open>?P\<^sub>2\<close>.  Such local
-  assumptions are effective for rewriting formulae such as \<open>x =
-  0 \<longrightarrow> y + x = y\<close>.
+  Given this rule, the Simplifier assumes \<open>?Q\<^sub>1\<close> and extracts rewrite rules
+  from it when simplifying \<open>?P\<^sub>2\<close>. Such local assumptions are effective for
+  rewriting formulae such as \<open>x = 0 \<longrightarrow> y + x = y\<close>.
 
   %FIXME
   %The local assumptions are also provided as theorems to the solver;
   %see \secref{sec:simp-solver} below.
 
   \<^medskip>
-  The following congruence rule for bounded quantifiers also
-  supplies contextual information --- about the bound variable:
-  @{text [display] "(?A = ?B) \<Longrightarrow> (\<And>x. x \<in> ?B \<Longrightarrow> ?P x \<longleftrightarrow> ?Q x) \<Longrightarrow>
+  The following congruence rule for bounded quantifiers also supplies
+  contextual information --- about the bound variable: @{text [display]
+  "(?A = ?B) \<Longrightarrow>
+    (\<And>x. x \<in> ?B \<Longrightarrow> ?P x \<longleftrightarrow> ?Q x) \<Longrightarrow>
     (\<forall>x \<in> ?A. ?P x) \<longleftrightarrow> (\<forall>x \<in> ?B. ?Q x)"}
 
   \<^medskip>
-  This congruence rule for conditional expressions can
-  supply contextual information for simplifying the arms:
-  @{text [display] "?p = ?q \<Longrightarrow> (?q \<Longrightarrow> ?a = ?c) \<Longrightarrow> (\<not> ?q \<Longrightarrow> ?b = ?d) \<Longrightarrow>
+  This congruence rule for conditional expressions can supply contextual
+  information for simplifying the arms: @{text [display]
+  "?p = ?q \<Longrightarrow>
+    (?q \<Longrightarrow> ?a = ?c) \<Longrightarrow>
+    (\<not> ?q \<Longrightarrow> ?b = ?d) \<Longrightarrow>
     (if ?p then ?a else ?b) = (if ?q then ?c else ?d)"}
 
-  A congruence rule can also \<^emph>\<open>prevent\<close> simplification of some
-  arguments.  Here is an alternative congruence rule for conditional
-  expressions that conforms to non-strict functional evaluation:
-  @{text [display] "?p = ?q \<Longrightarrow> (if ?p then ?a else ?b) = (if ?q then ?a else ?b)"}
+  A congruence rule can also \<^emph>\<open>prevent\<close> simplification of some arguments. Here
+  is an alternative congruence rule for conditional expressions that conforms
+  to non-strict functional evaluation: @{text [display]
+  "?p = ?q \<Longrightarrow>
+    (if ?p then ?a else ?b) = (if ?q then ?a else ?b)"}
 
-  Only the first argument is simplified; the others remain unchanged.
-  This can make simplification much faster, but may require an extra
-  case split over the condition \<open>?q\<close> to prove the goal.
-
-  \<^descr> @{command "print_simpset"} prints the collection of rules declared
-  to the Simplifier, which is also known as ``simpset'' internally; the
-  ``\<open>!\<close>'' option indicates extra verbosity.
+  Only the first argument is simplified; the others remain unchanged. This can
+  make simplification much faster, but may require an extra case split over
+  the condition \<open>?q\<close> to prove the goal.
 
-  For historical reasons, simpsets may occur independently from the
-  current context, but are conceptually dependent on it.  When the
-  Simplifier is invoked via one of its main entry points in the Isar
-  source language (as proof method \secref{sec:simp-meth} or rule
-  attribute \secref{sec:simp-meth}), its simpset is derived from the
-  current proof context, and carries a back-reference to that for
-  other tools that might get invoked internally (e.g.\ simplification
-  procedures \secref{sec:simproc}).  A mismatch of the context of the
-  simpset and the context of the problem being simplified may lead to
-  unexpected results.
+  \<^descr> @{command "print_simpset"} prints the collection of rules declared to the
+  Simplifier, which is also known as ``simpset'' internally; the ``\<open>!\<close>''
+  option indicates extra verbosity.
 
-
-  The implicit simpset of the theory context is propagated
-  monotonically through the theory hierarchy: forming a new theory,
-  the union of the simpsets of its imports are taken as starting
-  point.  Also note that definitional packages like @{command
-  "datatype"}, @{command "primrec"}, @{command "fun"} routinely
-  declare Simplifier rules to the target context, while plain
-  @{command "definition"} is an exception in \<^emph>\<open>not\<close> declaring
+  The implicit simpset of the theory context is propagated monotonically
+  through the theory hierarchy: forming a new theory, the union of the
+  simpsets of its imports is taken as the starting point. Also note that
+  definitional packages like @{command "datatype"}, @{command "primrec"},
+  @{command "fun"} routinely declare Simplifier rules to the target context,
+  while plain @{command "definition"} is an exception in \<^emph>\<open>not\<close> declaring
   anything.
 
   \<^medskip>
-  It is up the user to manipulate the current simpset further
-  by explicitly adding or deleting theorems as simplification rules,
-  or installing other tools via simplification procedures
-  (\secref{sec:simproc}).  Good simpsets are hard to design.  Rules
-  that obviously simplify, like \<open>?n + 0 \<equiv> ?n\<close> are good
-  candidates for the implicit simpset, unless a special
-  non-normalizing behavior of certain operations is intended.  More
-  specific rules (such as distributive laws, which duplicate subterms)
-  should be added only for specific proof steps.  Conversely,
-  sometimes a rule needs to be deleted just for some part of a proof.
-  The need of frequent additions or deletions may indicate a poorly
-  designed simpset.
+  It is up to the user to manipulate the current simpset further by explicitly
+  adding or deleting theorems as simplification rules, or installing other
+  tools via simplification procedures (\secref{sec:simproc}). Good simpsets
+  are hard to design. Rules that obviously simplify, like \<open>?n + 0 \<equiv> ?n\<close>, are
+  good candidates for the implicit simpset, unless a special non-normalizing
+  behavior of certain operations is intended. More specific rules (such as
+  distributive laws, which duplicate subterms) should be added only for
+  specific proof steps. Conversely, sometimes a rule needs to be deleted just
+  for some part of a proof. The need for frequent additions or deletions may
+  indicate a poorly designed simpset.
 
   \begin{warn}
-  The union of simpsets from theory imports (as described above) is
-  not always a good starting point for the new theory.  If some
-  ancestors have deleted simplification rules because they are no
-  longer wanted, while others have left those rules in, then the union
-  will contain the unwanted rules, and thus have to be deleted again
-  in the theory body.
+  The union of simpsets from theory imports (as described above) is not always
+  a good starting point for the new theory. If some ancestors have deleted
+  simplification rules because they are no longer wanted, while others have
+  left those rules in, then the union will contain the unwanted rules, and
+  these will have to be deleted again in the theory body.
   \end{warn}
 \<close>
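
text \<open>
  As an illustration of the \<open>cong\<close> declarations discussed above (an editorial
  sketch only; Isabelle/HOL already provides such a rule under the name
  \<open>if_weak_cong\<close>), the non-strict congruence rule for conditional expressions
  can be proved and registered as follows:
\<close>

(*<*)experiment begin(*>*)
lemma my_if_weak_cong [cong]:
  "p = q \<Longrightarrow> (if p then a else b) = (if q then a else b)"
  by simp
(*<*)end(*>*)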
 
 
 subsection \<open>Ordered rewriting with permutative rules\<close>
 
-text \<open>A rewrite rule is \<^emph>\<open>permutative\<close> if the left-hand side and
-  right-hand side are the equal up to renaming of variables.  The most
-  common permutative rule is commutativity: \<open>?x + ?y = ?y +
-  ?x\<close>.  Other examples include \<open>(?x - ?y) - ?z = (?x - ?z) -
-  ?y\<close> in arithmetic and \<open>insert ?x (insert ?y ?A) = insert ?y
-  (insert ?x ?A)\<close> for sets.  Such rules are common enough to merit
-  special attention.
+text \<open>
+  A rewrite rule is \<^emph>\<open>permutative\<close> if the left-hand side and right-hand side
+  are equal up to renaming of variables. The most common permutative rule
+  is commutativity: \<open>?x + ?y = ?y + ?x\<close>. Other examples include \<open>(?x - ?y) -
+  ?z = (?x - ?z) - ?y\<close> in arithmetic and \<open>insert ?x (insert ?y ?A) = insert ?y
+  (insert ?x ?A)\<close> for sets. Such rules are common enough to merit special
+  attention.
 
-  Because ordinary rewriting loops given such rules, the Simplifier
-  employs a special strategy, called \<^emph>\<open>ordered rewriting\<close>.
-  Permutative rules are detected and only applied if the rewriting
-  step decreases the redex wrt.\ a given term ordering.  For example,
-  commutativity rewrites \<open>b + a\<close> to \<open>a + b\<close>, but then
-  stops, because the redex cannot be decreased further in the sense of
-  the term ordering.
+  Because ordinary rewriting loops given such rules, the Simplifier employs a
+  special strategy, called \<^emph>\<open>ordered rewriting\<close>. Permutative rules are
+  detected and only applied if the rewriting step decreases the redex wrt.\ a
+  given term ordering. For example, commutativity rewrites \<open>b + a\<close> to \<open>a + b\<close>,
+  but then stops, because the redex cannot be decreased further in the sense
+  of the term ordering.
 
-  The default is lexicographic ordering of term structure, but this
-  could be also changed locally for special applications via
-  @{index_ML Simplifier.set_termless} in Isabelle/ML.
+  The default is lexicographic ordering of term structure, but this can also
+  be changed locally for special applications via @{index_ML
+  Simplifier.set_termless} in Isabelle/ML.
 
   \<^medskip>
-  Permutative rewrite rules are declared to the Simplifier
-  just like other rewrite rules.  Their special status is recognized
-  automatically, and their application is guarded by the term ordering
-  accordingly.\<close>
+  Permutative rewrite rules are declared to the Simplifier just like other
+  rewrite rules. Their special status is recognized automatically, and their
+  application is guarded by the term ordering accordingly.
+\<close>
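
text \<open>
  For illustration (an editorial sketch, assuming the standard Isabelle/HOL
  fact name \<open>add.commute\<close>): a single commutativity rule suffices to prove the
  following goal, because ordered rewriting turns the redex \<open>b + a\<close> into
  \<open>a + b\<close> and then stops, leaving a trivial equality for the implicit solver.
\<close>

lemma fixes a b :: nat shows "b + a = a + b"
  by (simp only: add.commute)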
 
 
 subsubsection \<open>Rewriting with AC operators\<close>
 
-text \<open>Ordered rewriting is particularly effective in the case of
-  associative-commutative operators.  (Associativity by itself is not
-  permutative.)  When dealing with an AC-operator \<open>f\<close>, keep
-  the following points in mind:
+text \<open>
+  Ordered rewriting is particularly effective in the case of
+  associative-commutative operators. (Associativity by itself is not
+  permutative.) When dealing with an AC-operator \<open>f\<close>, keep the following
+  points in mind:
 
-  \<^item> The associative law must always be oriented from left to
-  right, namely \<open>f (f x y) z = f x (f y z)\<close>.  The opposite
-  orientation, if used with commutativity, leads to looping in
-  conjunction with the standard term order.
+    \<^item> The associative law must always be oriented from left to right, namely
+    \<open>f (f x y) z = f x (f y z)\<close>. The opposite orientation, if used with
+    commutativity, leads to looping in conjunction with the standard term
+    order.
 
-  \<^item> To complete your set of rewrite rules, you must add not just
-  associativity (A) and commutativity (C) but also a derived rule
-  \<^emph>\<open>left-commutativity\<close> (LC): \<open>f x (f y z) = f y (f x z)\<close>.
-
+    \<^item> To complete your set of rewrite rules, you must add not just
+    associativity (A) and commutativity (C) but also a derived rule
+    \<^emph>\<open>left-commutativity\<close> (LC): \<open>f x (f y z) = f y (f x z)\<close>.
 
   Ordered rewriting with the combination of A, C, and LC sorts a term
   lexicographically --- the rewriting engine imitates bubble-sort.
@@ -704,9 +674,11 @@
 
 lemmas AC_rules = assoc commute left_commute
 
-text \<open>Thus the Simplifier is able to establish equalities with
-  arbitrary permutations of subterms, by normalizing to a common
-  standard form.  For example:\<close>
+text \<open>
+  Thus the Simplifier is able to establish equalities with arbitrary
+  permutations of subterms, by normalizing to a common standard form. For
+  example:
+\<close>
 
 lemma "(b \<bullet> c) \<bullet> a = xxx"
   apply (simp only: AC_rules)
@@ -719,10 +691,11 @@
 
 end
 
-text \<open>Martin and Nipkow @{cite "martin-nipkow"} discuss the theory and
-  give many examples; other algebraic structures are amenable to
-  ordered rewriting, such as Boolean rings.  The Boyer-Moore theorem
-  prover @{cite bm88book} also employs ordered rewriting.
+text \<open>
+  Martin and Nipkow @{cite "martin-nipkow"} discuss the theory and give many
+  examples; other algebraic structures are amenable to ordered rewriting, such
+  as Boolean rings. The Boyer-Moore theorem prover @{cite bm88book} also
+  employs ordered rewriting.
 \<close>
 
 
@@ -761,29 +734,29 @@
   These attributes and configurations options control various aspects of
   Simplifier tracing and debugging.
 
-  \<^descr> @{attribute simp_trace} makes the Simplifier output internal
-  operations.  This includes rewrite steps, but also bookkeeping like
-  modifications of the simpset.
+  \<^descr> @{attribute simp_trace} makes the Simplifier output internal operations.
+  This includes rewrite steps, but also bookkeeping like modifications of the
+  simpset.
 
-  \<^descr> @{attribute simp_trace_depth_limit} limits the effect of
-  @{attribute simp_trace} to the given depth of recursive Simplifier
-  invocations (when solving conditions of rewrite rules).
+  \<^descr> @{attribute simp_trace_depth_limit} limits the effect of @{attribute
+  simp_trace} to the given depth of recursive Simplifier invocations (when
+  solving conditions of rewrite rules).
 
-  \<^descr> @{attribute simp_debug} makes the Simplifier output some extra
-  information about internal operations.  This includes any attempted
-  invocation of simplification procedures.
+  \<^descr> @{attribute simp_debug} makes the Simplifier output some extra information
+  about internal operations. This includes any attempted invocation of
+  simplification procedures.
 
   \<^descr> @{attribute simp_trace_new} controls Simplifier tracing within
   Isabelle/PIDE applications, notably Isabelle/jEdit @{cite "isabelle-jedit"}.
-  This provides a hierarchical representation of the rewriting steps
-  performed by the Simplifier.
+  This provides a hierarchical representation of the rewriting steps performed
+  by the Simplifier.
 
   Users can configure the behaviour by specifying breakpoints, verbosity and
   enabling or disabling the interactive mode. In normal verbosity (the
-  default), only rule applications matching a breakpoint will be shown to
-  the user. In full verbosity, all rule applications will be logged.
-  Interactive mode interrupts the normal flow of the Simplifier and defers
-  the decision how to continue to the user via some GUI dialog.
+  default), only rule applications matching a breakpoint will be shown to the
+  user. In full verbosity, all rule applications will be logged. Interactive
+  mode interrupts the normal flow of the Simplifier and defers the decision
+  on how to continue to the user via some GUI dialog.
 
   \<^descr> @{attribute simp_break} declares term or theorem breakpoints for
   @{attribute simp_trace_new} as described above. Term breakpoints are
@@ -800,21 +773,19 @@
 
 subsection \<open>Simplification procedures \label{sec:simproc}\<close>
 
-text \<open>Simplification procedures are ML functions that produce proven
-  rewrite rules on demand.  They are associated with higher-order
-  patterns that approximate the left-hand sides of equations.  The
-  Simplifier first matches the current redex against one of the LHS
-  patterns; if this succeeds, the corresponding ML function is
-  invoked, passing the Simplifier context and redex term.  Thus rules
-  may be specifically fashioned for particular situations, resulting
-  in a more powerful mechanism than term rewriting by a fixed set of
-  rules.
+text \<open>
+  Simplification procedures are ML functions that produce proven rewrite rules
+  on demand. They are associated with higher-order patterns that approximate
+  the left-hand sides of equations. The Simplifier first matches the current
+  redex against one of the LHS patterns; if this succeeds, the corresponding
+  ML function is invoked, passing the Simplifier context and redex term. Thus
+  rules may be specifically fashioned for particular situations, resulting in
+  a more powerful mechanism than term rewriting by a fixed set of rules.
 
-  Any successful result needs to be a (possibly conditional) rewrite
-  rule \<open>t \<equiv> u\<close> that is applicable to the current redex.  The
-  rule will be applied just as any ordinary rewrite rule.  It is
-  expected to be already in \<^emph>\<open>internal form\<close>, bypassing the
-  automatic preprocessing of object-level equivalences.
+  Any successful result needs to be a (possibly conditional) rewrite rule \<open>t \<equiv>
+  u\<close> that is applicable to the current redex. The rule will be applied just as
+  any ordinary rewrite rule. It is expected to be already in \<^emph>\<open>internal form\<close>,
+  bypassing the automatic preprocessing of object-level equivalences.
 
   \begin{matharray}{rcl}
     @{command_def "simproc_setup"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
@@ -829,40 +800,39 @@
     @@{attribute simproc} (('add' ':')? | 'del' ':') (@{syntax name}+)
   \<close>}
 
-  \<^descr> @{command "simproc_setup"} defines a named simplification
-  procedure that is invoked by the Simplifier whenever any of the
-  given term patterns match the current redex.  The implementation,
-  which is provided as ML source text, needs to be of type @{ML_type
-  "morphism -> simpset -> cterm -> thm option"}, where the @{ML_type
-  cterm} represents the current redex \<open>r\<close> and the result is
-  supposed to be some proven rewrite rule \<open>r \<equiv> r'\<close> (or a
-  generalized version), or @{ML NONE} to indicate failure.  The
-  @{ML_type simpset} argument holds the full context of the current
-  Simplifier invocation, including the actual Isar proof context.  The
-  @{ML_type morphism} informs about the difference of the original
-  compilation context wrt.\ the one of the actual application later
-  on.  The optional @{keyword "identifier"} specifies theorems that
-  represent the logical content of the abstract theory of this
-  simproc.
+  \<^descr> @{command "simproc_setup"} defines a named simplification procedure that
+  is invoked by the Simplifier whenever any of the given term patterns match
+  the current redex. The implementation, which is provided as ML source text,
+  needs to be of type
+  @{ML_type "morphism -> Proof.context -> cterm -> thm option"}, where the
+  @{ML_type cterm} represents the current redex \<open>r\<close> and the result is supposed
+  to be some proven rewrite rule \<open>r \<equiv> r'\<close> (or a generalized version), or @{ML
+  NONE} to indicate failure. The @{ML_type Proof.context} argument holds the
+  full context of the current Simplifier invocation. The @{ML_type morphism}
+  informs about the difference of the original compilation context wrt.\ the
+  one of the actual application later on. The optional @{keyword "identifier"}
+  specifies theorems that represent the logical content of the abstract theory
+  of this simproc.
 
-  Morphisms and identifiers are only relevant for simprocs that are
-  defined within a local target context, e.g.\ in a locale.
+  Morphisms and identifiers are only relevant for simprocs that are defined
+  within a local target context, e.g.\ in a locale.
 
-  \<^descr> \<open>simproc add: name\<close> and \<open>simproc del: name\<close>
-  add or delete named simprocs to the current Simplifier context.  The
-  default is to add a simproc.  Note that @{command "simproc_setup"}
-  already adds the new simproc to the subsequent context.
+  \<^descr> \<open>simproc add: name\<close> and \<open>simproc del: name\<close> add or delete named simprocs
+  in the current Simplifier context. The default is to add a simproc. Note
+  that @{command "simproc_setup"} already adds the new simproc to the
+  subsequent context.
 \<close>
 
 
 subsubsection \<open>Example\<close>
 
-text \<open>The following simplification procedure for @{thm
-  [source=false, show_types] unit_eq} in HOL performs fine-grained
-  control over rule application, beyond higher-order pattern matching.
-  Declaring @{thm unit_eq} as @{attribute simp} directly would make
-  the Simplifier loop!  Note that a version of this simplification
-  procedure is already active in Isabelle/HOL.\<close>
+text \<open>
+  The following simplification procedure for @{thm [source = false,
+  show_types] unit_eq} in HOL performs fine-grained control over rule
+  application, beyond higher-order pattern matching. Declaring @{thm unit_eq}
+  as @{attribute simp} directly would make the Simplifier loop! Note that a
+  version of this simplification procedure is already active in Isabelle/HOL.
+\<close>
 
 (*<*)experiment begin(*>*)
 simproc_setup unit ("x::unit") =
@@ -871,20 +841,20 @@
     else SOME (mk_meta_eq @{thm unit_eq})\<close>
 (*<*)end(*>*)
 
-text \<open>Since the Simplifier applies simplification procedures
-  frequently, it is important to make the failure check in ML
-  reasonably fast.\<close>
+text \<open>
+  Since the Simplifier applies simplification procedures frequently, it is
+  important to make the failure check in ML reasonably fast.\<close>
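
text \<open>
  For example (an editorial sketch that relies on the variant of this simproc
  which is already active in Isabelle/HOL), the procedure allows plain
  @{method simp} to identify arbitrary terms of type \<open>unit\<close> with \<open>()\<close>:
\<close>

lemma "f (x::unit) = f ()"
  by simp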
 
 
 subsection \<open>Configurable Simplifier strategies \label{sec:simp-strategies}\<close>
 
-text \<open>The core term-rewriting engine of the Simplifier is normally
-  used in combination with some add-on components that modify the
-  strategy and allow to integrate other non-Simplifier proof tools.
-  These may be reconfigured in ML as explained below.  Even if the
-  default strategies of object-logics like Isabelle/HOL are used
-  unchanged, it helps to understand how the standard Simplifier
-  strategies work.\<close>
+text \<open>
+  The core term-rewriting engine of the Simplifier is normally used in
+  combination with some add-on components that modify the strategy and allow
+  the integration of other non-Simplifier proof tools. These may be
+  reconfigured in ML as explained below. Even if the default strategies of
+  object-logics like Isabelle/HOL are used unchanged, it helps to understand
+  how the standard Simplifier strategies work.\<close>
 
 
 subsubsection \<open>The subgoaler\<close>
@@ -897,21 +867,20 @@
   \end{mldecls}
 
   The subgoaler is the tactic used to solve subgoals arising out of
-  conditional rewrite rules or congruence rules.  The default should
-  be simplification itself.  In rare situations, this strategy may
-  need to be changed.  For example, if the premise of a conditional
-  rule is an instance of its conclusion, as in \<open>Suc ?m < ?n \<Longrightarrow>
-  ?m < ?n\<close>, the default strategy could loop.  % FIXME !??
+  conditional rewrite rules or congruence rules. The default should be
+  simplification itself. In rare situations, this strategy may need to be
+  changed. For example, if the premise of a conditional rule is an instance of
+  its conclusion, as in \<open>Suc ?m < ?n \<Longrightarrow> ?m < ?n\<close>, the default strategy could
+  loop. % FIXME !??
 
-  \<^descr> @{ML Simplifier.set_subgoaler}~\<open>tac ctxt\<close> sets the
-  subgoaler of the context to \<open>tac\<close>.  The tactic will
-  be applied to the context of the running Simplifier instance.
+    \<^descr> @{ML Simplifier.set_subgoaler}~\<open>tac ctxt\<close> sets the subgoaler of the
+    context to \<open>tac\<close>. The tactic will be applied to the context of the running
+    Simplifier instance.
 
-  \<^descr> @{ML Simplifier.prems_of}~\<open>ctxt\<close> retrieves the current
-  set of premises from the context.  This may be non-empty only if
-  the Simplifier has been told to utilize local assumptions in the
-  first place (cf.\ the options in \secref{sec:simp-meth}).
-
+    \<^descr> @{ML Simplifier.prems_of}~\<open>ctxt\<close> retrieves the current set of premises
+    from the context. This may be non-empty only if the Simplifier has been
+    told to utilize local assumptions in the first place (cf.\ the options in
+    \secref{sec:simp-meth}).
 
   As an example, consider the following alternative subgoaler:
 \<close>
@@ -923,9 +892,9 @@
     asm_simp_tac ctxt
 \<close>
 
-text \<open>This tactic first tries to solve the subgoal by assumption or
-  by resolving with with one of the premises, calling simplification
-  only if that fails.\<close>
+text \<open>
+  This tactic first tries to solve the subgoal by assumption or by resolving
+  with one of the premises, calling simplification only if that fails.\<close>
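
text \<open>
  A minimal sketch of how such a subgoaler could be defined and installed in
  ML (the names \<open>my_subgoaler\<close> and \<open>install_my_subgoaler\<close> are hypothetical,
  and the tactic is deliberately simplistic):
\<close>

ML \<open>
  (*assumption first, then plain simplification of the subgoal*)
  fun my_subgoaler ctxt =
    assume_tac ctxt ORELSE' asm_simp_tac ctxt;

  (*install it into a given Simplifier context*)
  val install_my_subgoaler = Simplifier.set_subgoaler my_subgoaler;
\<close>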
 
 
 subsubsection \<open>The solver\<close>
@@ -941,89 +910,76 @@
   @{index_ML_op addSSolver: "Proof.context * solver -> Proof.context"} \\
   \end{mldecls}
 
-  A solver is a tactic that attempts to solve a subgoal after
-  simplification.  Its core functionality is to prove trivial subgoals
-  such as @{prop "True"} and \<open>t = t\<close>, but object-logics might
-  be more ambitious.  For example, Isabelle/HOL performs a restricted
-  version of linear arithmetic here.
+  A solver is a tactic that attempts to solve a subgoal after simplification.
+  Its core functionality is to prove trivial subgoals such as @{prop "True"}
+  and \<open>t = t\<close>, but object-logics might be more ambitious. For example,
+  Isabelle/HOL performs a restricted version of linear arithmetic here.
 
-  Solvers are packaged up in abstract type @{ML_type solver}, with
-  @{ML Simplifier.mk_solver} as the only operation to create a solver.
+  Solvers are packaged up in abstract type @{ML_type solver}, with @{ML
+  Simplifier.mk_solver} as the only operation to create a solver.
 
   \<^medskip>
-  Rewriting does not instantiate unknowns.  For example,
-  rewriting alone cannot prove \<open>a \<in> ?A\<close> since this requires
-  instantiating \<open>?A\<close>.  The solver, however, is an arbitrary
-  tactic and may instantiate unknowns as it pleases.  This is the only
-  way the Simplifier can handle a conditional rewrite rule whose
-  condition contains extra variables.  When a simplification tactic is
-  to be combined with other provers, especially with the Classical
-  Reasoner, it is important whether it can be considered safe or not.
-  For this reason a simpset contains two solvers: safe and unsafe.
+  Rewriting does not instantiate unknowns. For example, rewriting alone cannot
+  prove \<open>a \<in> ?A\<close> since this requires instantiating \<open>?A\<close>. The solver, however,
+  is an arbitrary tactic and may instantiate unknowns as it pleases. This is
+  the only way the Simplifier can handle a conditional rewrite rule whose
+  condition contains extra variables. When a simplification tactic is to be
+  combined with other provers, especially with the Classical Reasoner, it is
+  important whether it can be considered safe or not. For this reason a
+  simpset contains two solvers: safe and unsafe.
 
-  The standard simplification strategy solely uses the unsafe solver,
-  which is appropriate in most cases.  For special applications where
-  the simplification process is not allowed to instantiate unknowns
-  within the goal, simplification starts with the safe solver, but may
-  still apply the ordinary unsafe one in nested simplifications for
-  conditional rules or congruences. Note that in this way the overall
-  tactic is not totally safe: it may instantiate unknowns that appear
-  also in other subgoals.
-
-  \<^descr> @{ML Simplifier.mk_solver}~\<open>name tac\<close> turns \<open>tac\<close> into a solver; the \<open>name\<close> is only attached as a
-  comment and has no further significance.
+  The standard simplification strategy solely uses the unsafe solver, which is
+  appropriate in most cases. For special applications where the simplification
+  process is not allowed to instantiate unknowns within the goal,
+  simplification starts with the safe solver, but may still apply the ordinary
+  unsafe one in nested simplifications for conditional rules or congruences.
+  Note that in this way the overall tactic is not totally safe: it may
+  instantiate unknowns that appear also in other subgoals.
 
-  \<^descr> \<open>ctxt setSSolver solver\<close> installs \<open>solver\<close> as
-  the safe solver of \<open>ctxt\<close>.
+  \<^descr> @{ML Simplifier.mk_solver}~\<open>name tac\<close> turns \<open>tac\<close> into a solver; the
+  \<open>name\<close> is only attached as a comment and has no further significance.
 
-  \<^descr> \<open>ctxt addSSolver solver\<close> adds \<open>solver\<close> as an
-  additional safe solver; it will be tried after the solvers which had
-  already been present in \<open>ctxt\<close>.
+  \<^descr> \<open>ctxt setSSolver solver\<close> installs \<open>solver\<close> as the safe solver of \<open>ctxt\<close>.
 
-  \<^descr> \<open>ctxt setSolver solver\<close> installs \<open>solver\<close> as the
-  unsafe solver of \<open>ctxt\<close>.
+  \<^descr> \<open>ctxt addSSolver solver\<close> adds \<open>solver\<close> as an additional safe solver; it
+  will be tried after the solvers which had already been present in \<open>ctxt\<close>.
 
-  \<^descr> \<open>ctxt addSolver solver\<close> adds \<open>solver\<close> as an
-  additional unsafe solver; it will be tried after the solvers which
-  had already been present in \<open>ctxt\<close>.
+  \<^descr> \<open>ctxt setSolver solver\<close> installs \<open>solver\<close> as the unsafe solver of \<open>ctxt\<close>.
+
+  \<^descr> \<open>ctxt addSolver solver\<close> adds \<open>solver\<close> as an additional unsafe solver; it
+  will be tried after the solvers which had already been present in \<open>ctxt\<close>.
 
 
   \<^medskip>
-  The solver tactic is invoked with the context of the
-  running Simplifier.  Further operations
-  may be used to retrieve relevant information, such as the list of
-  local Simplifier premises via @{ML Simplifier.prems_of} --- this
-  list may be non-empty only if the Simplifier runs in a mode that
-  utilizes local assumptions (see also \secref{sec:simp-meth}).  The
-  solver is also presented the full goal including its assumptions in
-  any case.  Thus it can use these (e.g.\ by calling @{ML
-  assume_tac}), even if the Simplifier proper happens to ignore local
-  premises at the moment.
+  The solver tactic is invoked with the context of the running Simplifier.
+  Further operations may be used to retrieve relevant information, such as the
+  list of local Simplifier premises via @{ML Simplifier.prems_of} --- this
+  list may be non-empty only if the Simplifier runs in a mode that utilizes
+  local assumptions (see also \secref{sec:simp-meth}). The solver is also
+  presented with the full goal, including its assumptions, in any case. Thus
+  it can use these (e.g.\ by calling @{ML assume_tac}), even if the Simplifier
+  proper happens to ignore local premises at the moment.
 
   \<^medskip>
-  As explained before, the subgoaler is also used to solve
-  the premises of congruence rules.  These are usually of the form
-  \<open>s = ?x\<close>, where \<open>s\<close> needs to be simplified and
-  \<open>?x\<close> needs to be instantiated with the result.  Typically,
+  As explained before, the subgoaler is also used to solve the premises of
+  congruence rules. These are usually of the form \<open>s = ?x\<close>, where \<open>s\<close> needs to
+  be simplified and \<open>?x\<close> needs to be instantiated with the result. Typically,
   the subgoaler will invoke the Simplifier at some point, which will
-  eventually call the solver.  For this reason, solver tactics must be
-  prepared to solve goals of the form \<open>t = ?x\<close>, usually by
-  reflexivity.  In particular, reflexivity should be tried before any
-  of the fancy automated proof tools.
+  eventually call the solver. For this reason, solver tactics must be prepared
+  to solve goals of the form \<open>t = ?x\<close>, usually by reflexivity. In particular,
+  reflexivity should be tried before any of the fancy automated proof tools.
 
-  It may even happen that due to simplification the subgoal is no
-  longer an equality.  For example, \<open>False \<longleftrightarrow> ?Q\<close> could be
-  rewritten to \<open>\<not> ?Q\<close>.  To cover this case, the solver could
-  try resolving with the theorem \<open>\<not> False\<close> of the
+  It may even happen that due to simplification the subgoal is no longer an
+  equality. For example, \<open>False \<longleftrightarrow> ?Q\<close> could be rewritten to \<open>\<not> ?Q\<close>. To cover
+  this case, the solver could try resolving with the theorem \<open>\<not> False\<close> of the
   object-logic.
 
   \<^medskip>
   \begin{warn}
-  If a premise of a congruence rule cannot be proved, then the
-  congruence is ignored.  This should only happen if the rule is
-  \<^emph>\<open>conditional\<close> --- that is, contains premises not of the form
-  \<open>t = ?x\<close>.  Otherwise it indicates that some congruence rule,
-  or possibly the subgoaler or solver, is faulty.
+  If a premise of a congruence rule cannot be proved, then the congruence is
+  ignored. This should only happen if the rule is \<^emph>\<open>conditional\<close> --- that is,
+  contains premises not of the form \<open>t = ?x\<close>. Otherwise it indicates that some
+  congruence rule, or possibly the subgoaler or solver, is faulty.
   \end{warn}
 \<close>
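
  A minimal sketch of the solver interface described above (not part of the
  changeset itself): it assumes an Isabelle/HOL proof context bound to ctxt;
  the names refl_solver and ctxt' are purely illustrative.

    (* a solver that tries reflexivity before assumption, as recommended above *)
    val refl_solver =
      Simplifier.mk_solver "refl_then_assume"
        (fn ctxt => resolve_tac ctxt @{thms refl} ORELSE' assume_tac ctxt);

    (* install it as an additional unsafe solver *)
    val ctxt' = ctxt addSolver refl_solver;
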
 
@@ -1042,69 +998,62 @@
   @{index_ML Splitter.del_split: "thm -> Proof.context -> Proof.context"} \\
   \end{mldecls}
 
-  The looper is a list of tactics that are applied after
-  simplification, in case the solver failed to solve the simplified
-  goal.  If the looper succeeds, the simplification process is started
-  all over again.  Each of the subgoals generated by the looper is
-  attacked in turn, in reverse order.
+  The looper is a list of tactics that are applied after simplification, in
+  case the solver failed to solve the simplified goal. If the looper succeeds,
+  the simplification process is started all over again. Each of the subgoals
+  generated by the looper is attacked in turn, in reverse order.
 
-  A typical looper is \<^emph>\<open>case splitting\<close>: the expansion of a
-  conditional.  Another possibility is to apply an elimination rule on
-  the assumptions.  More adventurous loopers could start an induction.
+  A typical looper is \<^emph>\<open>case splitting\<close>: the expansion of a conditional.
+  Another possibility is to apply an elimination rule on the assumptions. More
+  adventurous loopers could start an induction.
 
-  \<^descr> \<open>ctxt setloop tac\<close> installs \<open>tac\<close> as the only
-  looper tactic of \<open>ctxt\<close>.
+    \<^descr> \<open>ctxt setloop tac\<close> installs \<open>tac\<close> as the only looper tactic of \<open>ctxt\<close>.
 
-  \<^descr> \<open>ctxt addloop (name, tac)\<close> adds \<open>tac\<close> as an
-  additional looper tactic with name \<open>name\<close>, which is
-  significant for managing the collection of loopers.  The tactic will
-  be tried after the looper tactics that had already been present in
-  \<open>ctxt\<close>.
+    \<^descr> \<open>ctxt addloop (name, tac)\<close> adds \<open>tac\<close> as an additional looper tactic
+    with name \<open>name\<close>, which is significant for managing the collection of
+    loopers. The tactic will be tried after the looper tactics that had
+    already been present in \<open>ctxt\<close>.
 
-  \<^descr> \<open>ctxt delloop name\<close> deletes the looper tactic that was
-  associated with \<open>name\<close> from \<open>ctxt\<close>.
+    \<^descr> \<open>ctxt delloop name\<close> deletes the looper tactic that was associated with
+    \<open>name\<close> from \<open>ctxt\<close>.
 
-  \<^descr> @{ML Splitter.add_split}~\<open>thm ctxt\<close> adds split tactics
-  for \<open>thm\<close> as additional looper tactics of \<open>ctxt\<close>.
+    \<^descr> @{ML Splitter.add_split}~\<open>thm ctxt\<close> adds split tactics for \<open>thm\<close> as
+    additional looper tactics of \<open>ctxt\<close>.
 
-  \<^descr> @{ML Splitter.del_split}~\<open>thm ctxt\<close> deletes the split
-  tactic corresponding to \<open>thm\<close> from the looper tactics of
-  \<open>ctxt\<close>.
+    \<^descr> @{ML Splitter.del_split}~\<open>thm ctxt\<close> deletes the split tactic
+    corresponding to \<open>thm\<close> from the looper tactics of \<open>ctxt\<close>.
 
-
-  The splitter replaces applications of a given function; the
-  right-hand side of the replacement can be anything.  For example,
-  here is a splitting rule for conditional expressions:
+  The splitter replaces applications of a given function; the right-hand side
+  of the replacement can be anything. For example, here is a splitting rule
+  for conditional expressions:
 
   @{text [display] "?P (if ?Q ?x ?y) \<longleftrightarrow> (?Q \<longrightarrow> ?P ?x) \<and> (\<not> ?Q \<longrightarrow> ?P ?y)"}
 
-  Another example is the elimination operator for Cartesian products
-  (which happens to be called @{const case_prod} in Isabelle/HOL:
+  Another example is the elimination operator for Cartesian products (which
+  happens to be called @{const case_prod} in Isabelle/HOL):
 
   @{text [display] "?P (case_prod ?f ?p) \<longleftrightarrow> (\<forall>a b. ?p = (a, b) \<longrightarrow> ?P (f a b))"}
 
-  For technical reasons, there is a distinction between case splitting
-  in the conclusion and in the premises of a subgoal.  The former is
-  done by @{ML Splitter.split_tac} with rules like @{thm [source]
-  if_split} or @{thm [source] option.split}, which do not split the
-  subgoal, while the latter is done by @{ML Splitter.split_asm_tac}
-  with rules like @{thm [source] if_split_asm} or @{thm [source]
-  option.split_asm}, which split the subgoal.  The function @{ML
-  Splitter.add_split} automatically takes care of which tactic to
-  call, analyzing the form of the rules given as argument; it is the
-  same operation behind \<open>split\<close> attribute or method modifier
-  syntax in the Isar source language.
+  For technical reasons, there is a distinction between case splitting in the
+  conclusion and in the premises of a subgoal. The former is done by @{ML
+  Splitter.split_tac} with rules like @{thm [source] if_split} or @{thm
+  [source] option.split}, which do not split the subgoal, while the latter is
+  done by @{ML Splitter.split_asm_tac} with rules like @{thm [source]
+  if_split_asm} or @{thm [source] option.split_asm}, which split the subgoal.
+  The function @{ML Splitter.add_split} automatically takes care of which
+  tactic to call, analyzing the form of the rules given as argument; it is the
+  same operation behind the \<open>split\<close> attribute or method modifier syntax in the
+  Isar source language.
 
-  Case splits should be allowed only when necessary; they are
-  expensive and hard to control.  Case-splitting on if-expressions in
-  the conclusion is usually beneficial, so it is enabled by default in
-  Isabelle/HOL and Isabelle/FOL/ZF.
+  Case splits should be allowed only when necessary; they are expensive and
+  hard to control. Case-splitting on if-expressions in the conclusion is
+  usually beneficial, so it is enabled by default in Isabelle/HOL and
+  Isabelle/FOL/ZF.
 
   \begin{warn}
-  With @{ML Splitter.split_asm_tac} as looper component, the
-  Simplifier may split subgoals!  This might cause unexpected problems
-  in tactic expressions that silently assume 0 or 1 subgoals after
-  simplification.
+  With @{ML Splitter.split_asm_tac} as looper component, the Simplifier may
+  split subgoals! This might cause unexpected problems in tactic expressions
+  that silently assume 0 or 1 subgoals after simplification.
   \end{warn}
 \<close>
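
  A minimal sketch of the looper and splitter operations described above (not
  part of the changeset itself): it assumes an Isabelle/HOL proof context bound
  to ctxt; the looper name my_elim and the contexts ctxt1..ctxt3 are purely
  illustrative.

    (* add and remove split rules for if-expressions in the conclusion *)
    val ctxt1 = ctxt |> Splitter.add_split @{thm if_split};
    val ctxt2 = ctxt1 |> Splitter.del_split @{thm if_split};

    (* a custom looper that applies an elimination rule to the assumptions *)
    val ctxt3 = ctxt addloop ("my_elim", fn ctxt => eresolve_tac ctxt @{thms exE});
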
 
@@ -1123,18 +1072,17 @@
     opt: '(' ('no_asm' | 'no_asm_simp' | 'no_asm_use') ')'
   \<close>}
 
-  \<^descr> @{attribute simplified}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> causes a theorem to
-  be simplified, either by exactly the specified rules \<open>a\<^sub>1, \<dots>,
-  a\<^sub>n\<close>, or the implicit Simplifier context if no arguments are given.
-  The result is fully simplified by default, including assumptions and
-  conclusion; the options \<open>no_asm\<close> etc.\ tune the Simplifier in
-  the same way as the for the \<open>simp\<close> method.
+  \<^descr> @{attribute simplified}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> causes a theorem to be simplified,
+  either by exactly the specified rules \<open>a\<^sub>1, \<dots>, a\<^sub>n\<close>, or the implicit
+  Simplifier context if no arguments are given. The result is fully simplified
+  by default, including assumptions and conclusion; the options \<open>no_asm\<close> etc.\
+  tune the Simplifier in the same way as for the \<open>simp\<close> method.
 
-  Note that forward simplification restricts the Simplifier to its
-  most basic operation of term rewriting; solver and looper tactics
-  (\secref{sec:simp-strategies}) are \<^emph>\<open>not\<close> involved here.  The
-  @{attribute simplified} attribute should be only rarely required
-  under normal circumstances.
+  Note that forward simplification restricts the Simplifier to its most basic
+  operation of term rewriting; solver and looper tactics
+  (\secref{sec:simp-strategies}) are \<^emph>\<open>not\<close> involved here. The @{attribute
+  simplified} attribute should only rarely be required under normal
+  circumstances.
 \<close>
 
 
--- a/src/Pure/Concurrent/future.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/Concurrent/future.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -99,6 +99,13 @@
 fun peek x = Single_Assignment.peek (result_of x);
 fun is_finished x = is_some (peek x);
 
+val _ =
+  PolyML.addPrettyPrinter (fn depth => fn pretty => fn x =>
+    (case peek x of
+      NONE => PolyML.PrettyString "<future>"
+    | SOME (Exn.Exn _) => PolyML.PrettyString "<failed>"
+    | SOME (Exn.Res y) => pretty (y, depth)));
+
 
 
 (** scheduling **)
@@ -665,4 +672,3 @@
 end;
 
 type 'a future = 'a Future.future;
-
--- a/src/Pure/Concurrent/lazy.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/Concurrent/lazy.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -99,7 +99,16 @@
   if is_finished x then Future.value_result (force_result x)
   else (singleton o Future.forks) params (fn () => force x);
 
+
+(* toplevel pretty printing *)
+
+val _ =
+  PolyML.addPrettyPrinter (fn depth => fn pretty => fn x =>
+    (case peek x of
+      NONE => PolyML.PrettyString "<lazy>"
+    | SOME (Exn.Exn _) => PolyML.PrettyString "<failed>"
+    | SOME (Exn.Res y) => pretty (y, depth)));
+
 end;
 
 type 'a lazy = 'a Lazy.lazy;
-
--- a/src/Pure/Concurrent/standard_thread.scala	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/Concurrent/standard_thread.scala	Fri Mar 18 20:29:50 2016 +0100
@@ -11,8 +11,6 @@
 import java.lang.Thread
 import java.util.concurrent.{ThreadPoolExecutor, TimeUnit, LinkedBlockingQueue, ThreadFactory}
 
-import scala.concurrent.{ExecutionContext, ExecutionContextExecutor}
-
 
 object Standard_Thread
 {
@@ -50,9 +48,6 @@
       executor
     }
 
-  lazy val execution_context: ExecutionContextExecutor =
-    ExecutionContext.fromExecutorService(pool)
-
 
   /* delayed events */
 
--- a/src/Pure/Concurrent/synchronized.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/Concurrent/synchronized.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -66,4 +66,9 @@
 
 end;
 
+
+(* toplevel pretty printing *)
+
+val _ = PolyML.addPrettyPrinter (fn depth => fn pretty => fn var => pretty (value var, depth));
+
 end;
--- a/src/Pure/Concurrent/task_queue.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/Concurrent/task_queue.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -394,4 +394,10 @@
         | NONE => ((NONE, deps'), queue)))
   end;
 
+
+(* toplevel pretty printing *)
+
+val _ = PolyML.addPrettyPrinter (fn _ => fn _ => Pretty.to_polyml o Pretty.str o str_of_task);
+val _ = PolyML.addPrettyPrinter (fn _ => fn _ => Pretty.to_polyml o Pretty.str o str_of_group);
+
 end;
--- a/src/Pure/General/binding.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/General/binding.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -35,7 +35,6 @@
   val concealed: binding -> binding
   val pretty: binding -> Pretty.T
   val print: binding -> string
-  val pp: binding -> Pretty.T
   val bad: binding -> string
   val check: binding -> unit
   val name_spec: scope list -> (string * bool) list -> binding ->
@@ -155,7 +154,7 @@
 
 val print = Pretty.unformatted_string_of o pretty;
 
-val pp = pretty o set_pos Position.none;
+val _ = PolyML.addPrettyPrinter (fn _ => fn _ => Pretty.to_polyml o pretty o set_pos Position.none);
 
 
 (* check *)
@@ -191,6 +190,7 @@
       andalso error (bad binding);
   in {restriction = restriction, concealed = concealed, spec = if null spec2 then [] else spec} end;
 
+
 end;
 
 type binding = Binding.binding;
--- a/src/Pure/General/path.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/General/path.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -173,6 +173,8 @@
 
 val print = Pretty.unformatted_string_of o pretty;
 
+val _ = PolyML.addPrettyPrinter (fn _ => fn _ => Pretty.to_polyml o pretty);
+
 
 (* base element *)
 
--- a/src/Pure/General/pretty.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/General/pretty.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -72,8 +72,10 @@
   val block_enclose: T * T -> T list -> T
   val writeln_chunks: T list -> unit
   val writeln_chunks2: T list -> unit
-  val to_ML: T -> ML_Pretty.pretty
+  val to_ML: int -> T -> ML_Pretty.pretty
   val from_ML: ML_Pretty.pretty -> T
+  val to_polyml: T -> PolyML.pretty
+  val from_polyml: PolyML.pretty -> T
 end;
 
 structure Pretty: PRETTY =
@@ -372,13 +374,14 @@
 
 
 
-(** ML toplevel pretty printing **)
+(** toplevel pretty printing **)
 
-fun to_ML (Block (m, consistent, indent, prts, _)) =
-      ML_Pretty.Block (m, consistent, FixedInt.fromInt indent, map to_ML prts)
-  | to_ML (Break (force, wd, ind)) =
+fun to_ML 0 (Block _) = ML_Pretty.str "..."
+  | to_ML depth (Block (m, consistent, indent, prts, _)) =
+      ML_Pretty.Block (m, consistent, FixedInt.fromInt indent, map (to_ML (depth - 1)) prts)
+  | to_ML _ (Break (force, wd, ind)) =
       ML_Pretty.Break (force, FixedInt.fromInt wd, FixedInt.fromInt ind)
-  | to_ML (Str (s, len)) = ML_Pretty.String (s, FixedInt.fromInt len);
+  | to_ML _ (Str (s, len)) = ML_Pretty.String (s, FixedInt.fromInt len);
 
 fun from_ML (ML_Pretty.Block (m, consistent, indent, prts)) =
       make_block m consistent (FixedInt.toInt indent) (map from_ML prts)
@@ -386,6 +389,14 @@
       Break (force, FixedInt.toInt wd, FixedInt.toInt ind)
   | from_ML (ML_Pretty.String (s, len)) = Str (s, FixedInt.toInt len);
 
+val to_polyml = to_ML ~1 #> ML_Pretty.to_polyml;
+val from_polyml = ML_Pretty.from_polyml #> from_ML;
+
 end;
 
+val _ =
+  PolyML.addPrettyPrinter (fn depth => fn _ => ML_Pretty.to_polyml o to_ML (depth * 2 + 1) o quote);
+
+val _ = PolyML.addPrettyPrinter (fn _ => fn _ => to_polyml o position);
+
 end;
--- a/src/Pure/General/secure.ML	Fri Mar 18 12:54:20 2016 +0100
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,27 +0,0 @@
-(*  Title:      Pure/General/secure.ML
-    Author:     Makarius
-
-Secure critical operations.
-*)
-
-signature SECURE =
-sig
-  val set_secure: unit -> unit
-  val is_secure: unit -> bool
-  val deny: string -> unit
-  val deny_ml: unit -> unit
-end;
-
-structure Secure: SECURE =
-struct
-
-val secure = Unsynchronized.ref false;
-
-fun set_secure () = secure := true;
-fun is_secure () = ! secure;
-
-fun deny msg = if is_secure () then error msg else ();
-
-fun deny_ml () = deny "Cannot evaluate ML source in secure mode";
-
-end;
--- a/src/Pure/General/sha1.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/General/sha1.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -1,8 +1,8 @@
 (*  Title:      Pure/General/sha1.ML
     Author:     Makarius
+    Author:     Sascha Boehme, TU Muenchen
 
-Digesting strings according to SHA-1 (see RFC 3174) -- relatively slow
-version in pure ML.
+Digesting strings according to SHA-1 (see RFC 3174).
 *)
 
 signature SHA1 =
@@ -11,11 +11,16 @@
   val digest: string -> digest
   val rep: digest -> string
   val fake: string -> digest
+  val test_samples: unit -> unit
 end;
 
 structure SHA1: SHA1 =
 struct
 
+(** internal implementation in ML -- relatively slow **)
+
+local
+
 (* 32bit words *)
 
 infix 4 << >>;
@@ -45,7 +50,7 @@
 
 (* padding *)
 
-fun pack_bytes 0 n = ""
+fun pack_bytes 0 _ = ""
   | pack_bytes k n = pack_bytes (k - 1) (n div 256) ^ chr (n mod 256);
 
 fun padded_text str =
@@ -61,7 +66,7 @@
   in ((len + size padding) div 4, word) end;
 
 
-(* digest_string *)
+(* digest_string_internal *)
 
 fun digest_word (i, w, {a, b, c, d, e}) =
   let
@@ -87,7 +92,9 @@
      e = d}
   end;
 
-fun digest_string str =
+in
+
+fun digest_string_internal str =
   let
     val (text_len, text) = padded_text str;
 
@@ -129,14 +136,89 @@
     val hex = hex_word o hash;
   in hex 0 ^ hex 1 ^ hex 2 ^ hex 3 ^ hex 4 end;
 
+end;
 
-(* type digest *)
+
+(** external implementation in C **)
+
+local
+
+(* digesting *)
+
+fun hex_digit i = if i < 10 then chr (ord "0" + i) else chr (ord "a" + i - 10);
+
+fun hex_string arr i =
+  let val c = CInterface.fromCchar (CInterface.offset i CInterface.Cchar arr)
+  in (op ^) (apply2 hex_digit (Integer.div_mod (Char.ord c) 16)) end
+
+val lib_path =
+  ("$ML_HOME/" ^ (if ML_System.platform_is_windows then "sha1.dll" else "libsha1.so"))
+  |> Path.explode;
+
+val STRING_INPUT_BYTES =
+  CInterface.mkConversion undefined (CInterface.toCbytes o Byte.stringToBytes)
+    (CInterface.Cpointer CInterface.Cchar);
+
+in
+
+fun digest_string_external str =
+  let
+    val digest = CInterface.alloc 20 CInterface.Cchar;
+    val _ =
+      CInterface.call3
+        (CInterface.get_sym (File.platform_path lib_path) "sha1_buffer")
+        (STRING_INPUT_BYTES, CInterface.LONG, CInterface.POINTER)
+        CInterface.POINTER (str, size str, CInterface.address digest);
+  in fold (suffix o hex_string digest) (0 upto 19) "" end;
+
+end;
+
+
+
+(** type digest **)
 
 datatype digest = Digest of string;
 
-val digest = Digest o digest_string;
 fun rep (Digest s) = s;
-
 val fake = Digest;
 
+val _ = PolyML.addPrettyPrinter (fn _ => fn _ => Pretty.to_polyml o Pretty.str o quote o rep);
+
+fun digest_string str = digest_string_external str
+  handle CInterface.Foreign msg =>
+    (warning (msg ^ "\nUsing slow ML implementation of SHA1.digest"); digest_string_internal str);
+
+val digest = Digest o digest_string;
+
+
+
+(** SHA1 samples found in the wild **)
+
+local
+
+fun check (msg, key) =
+  let val key' = rep (digest msg) in
+    if key = key' then ()
+    else
+      raise Fail ("SHA1 library integrity test failed on " ^ quote msg ^ ":\n" ^
+        key ^ " expected, but\n" ^ key' ^ " was found")
+  end;
+
+in
+
+fun test_samples () =
+  List.app check
+   [("", "da39a3ee5e6b4b0d3255bfef95601890afd80709"),
+    ("a", "86f7e437faa5a7fce15d1ddcb9eaeaea377667b8"),
+    ("abc", "a9993e364706816aba3e25717850c26c9cd0d89d"),
+    ("abcdefghijklmnopqrstuvwxyz", "32d10c7b8cf96570ca04ce37f2a19d84240d3a89"),
+    ("The quick brown fox jumps over the lazy dog", "2fd4e1c67a2d28fced849ee1bb76e7391b93eb12"),
+    (replicate_string 100 "\000", "ed4a77d1b56a118938788fc53037759b6c501e3d"),
+    ("a\000b", "4a3dec2d1f8245280855c42db0ee4239f917fdb8"),
+    ("\000\001", "3f29546453678b855931c174a97d6c0894b8f546")];
+
 end;
+
+val _ = test_samples ();
+
+end;
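
  A minimal usage sketch for the merged SHA1 module above (not part of the
  changeset itself); the digest of "abc" matches the corresponding test sample.

    (* compute and inspect a digest; digest_string falls back to the slow ML
       implementation if the external C library is unavailable *)
    val d = SHA1.digest "abc";
    val s = SHA1.rep d;   (*"a9993e364706816aba3e25717850c26c9cd0d89d"*)
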
--- a/src/Pure/General/sha1_polyml.ML	Fri Mar 18 12:54:20 2016 +0100
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,52 +0,0 @@
-(*  Title:      Pure/General/sha1_polyml.ML
-    Author:     Sascha Boehme, TU Muenchen
-
-Digesting strings according to SHA-1 (see RFC 3174) -- based on an
-external implementation in C with a fallback to an internal
-implementation.
-*)
-
-structure SHA1: SHA1 =
-struct
-
-(* digesting *)
-
-fun hex_digit i = if i < 10 then chr (ord "0" + i) else chr (ord "a" + i - 10);
-
-fun hex_string arr i =
-  let val c = CInterface.fromCchar (CInterface.offset i CInterface.Cchar arr)
-  in (op ^) (apply2 hex_digit (Integer.div_mod (Char.ord c) 16)) end
-
-val lib_path =
-  ("$ML_HOME/" ^ (if ML_System.platform_is_windows then "sha1.dll" else "libsha1.so"))
-  |> Path.explode;
-
-val STRING_INPUT_BYTES =
-  CInterface.mkConversion undefined (CInterface.toCbytes o Byte.stringToBytes)
-    (CInterface.Cpointer CInterface.Cchar);
-
-fun digest_external str =
-  let
-    val digest = CInterface.alloc 20 CInterface.Cchar;
-    val _ =
-      CInterface.call3
-        (CInterface.get_sym (File.platform_path lib_path) "sha1_buffer")
-        (STRING_INPUT_BYTES, CInterface.LONG, CInterface.POINTER)
-        CInterface.POINTER (str, size str, CInterface.address digest);
-  in fold (suffix o hex_string digest) (0 upto 19) "" end;
-
-fun digest_string str = digest_external str
-  handle CInterface.Foreign msg =>
-    (warning (msg ^ "\nUsing slow ML implementation of SHA1.digest"); SHA1.rep (SHA1.digest str));
-
-
-(* type digest *)
-
-datatype digest = Digest of string;
-
-val digest = Digest o digest_string;
-fun rep (Digest s) = s;
-
-val fake = Digest;
-
-end;
--- a/src/Pure/General/sha1_samples.ML	Fri Mar 18 12:54:20 2016 +0100
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,36 +0,0 @@
-(*  Title:      Pure/General/sha1_samples.ML
-    Author:     Makarius
-
-Some SHA1 samples found in the wild.
-*)
-
-signature SHA1_SAMPLES =
-sig
-  val test: unit -> unit
-end;
-
-structure SHA1_Samples: SHA1_SAMPLES =
-struct
-
-fun check (msg, key) =
-  let val key' = SHA1.rep (SHA1.digest msg) in
-    if key = key' then ()
-    else
-      raise Fail ("SHA1 library integrity test failed on " ^ quote msg ^ ":\n" ^
-        key ^ " expected, but\n" ^ key' ^ " was found")
-  end;
-
-fun test () =
-  List.app check
-   [("", "da39a3ee5e6b4b0d3255bfef95601890afd80709"),
-    ("a", "86f7e437faa5a7fce15d1ddcb9eaeaea377667b8"),
-    ("abc", "a9993e364706816aba3e25717850c26c9cd0d89d"),
-    ("abcdefghijklmnopqrstuvwxyz", "32d10c7b8cf96570ca04ce37f2a19d84240d3a89"),
-    ("The quick brown fox jumps over the lazy dog", "2fd4e1c67a2d28fced849ee1bb76e7391b93eb12"),
-    (replicate_string 100 "\000", "ed4a77d1b56a118938788fc53037759b6c501e3d"),
-    ("a\000b", "4a3dec2d1f8245280855c42db0ee4239f917fdb8"),
-    ("\000\001", "3f29546453678b855931c174a97d6c0894b8f546")];
-
-val _ = test ();
-
-end;
--- a/src/Pure/Isar/proof.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/Isar/proof.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -173,6 +173,10 @@
       (context * thm list list -> state -> state) *
       (context * thm list list -> context -> context)};
 
+val _ =
+  PolyML.addPrettyPrinter (fn _ => fn _ => fn _: state =>
+    Pretty.to_polyml (Pretty.str "<Proof.state>"));
+
 fun make_goal (statement, using, goal, before_qed, after_qed) =
   Goal {statement = statement, using = using, goal = goal,
     before_qed = before_qed, after_qed = after_qed};
--- a/src/Pure/Isar/toplevel.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/Isar/toplevel.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -199,6 +199,8 @@
 
 fun pretty_abstract state = Pretty.str ("<Isar " ^ str_of_state state ^ ">");
 
+val _ = PolyML.addPrettyPrinter (fn _ => fn _ => Pretty.to_polyml o pretty_abstract);
+
 
 
 (** toplevel transitions **)
--- a/src/Pure/ML/install_pp_polyml.ML	Fri Mar 18 12:54:20 2016 +0100
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,164 +0,0 @@
-(*  Title:      Pure/ML/install_pp_polyml.ML
-    Author:     Makarius
-
-ML toplevel pretty-printing for Poly/ML.
-*)
-
-PolyML.addPrettyPrinter (fn _ => fn _ => fn _: Pretty.T =>
-  ML_Pretty.to_polyml (Pretty.to_ML (Pretty.str "<pretty>")));
-
-PolyML.addPrettyPrinter (fn _ => fn _ => fn lexicon =>
-  ML_Pretty.to_polyml (Pretty.to_ML (Lexicon.pp_lexicon lexicon)));
-
-PolyML.addPrettyPrinter (fn _ => fn _ => fn task =>
-  ML_Pretty.to_polyml (Pretty.to_ML (Pretty.str (Task_Queue.str_of_task task))));
-
-PolyML.addPrettyPrinter (fn _ => fn _ => fn group =>
-  ML_Pretty.to_polyml (Pretty.to_ML (Pretty.str (Task_Queue.str_of_group group))));
-
-PolyML.addPrettyPrinter (fn _ => fn _ => fn pos =>
-  ML_Pretty.to_polyml (Pretty.to_ML (Pretty.position pos)));
-
-PolyML.addPrettyPrinter (fn _ => fn _ => fn binding =>
-  ML_Pretty.to_polyml (Pretty.to_ML (Binding.pp binding)));
-
-PolyML.addPrettyPrinter (fn _ => fn _ => fn th =>
-  ML_Pretty.to_polyml (Pretty.to_ML (Proof_Display.pp_thm Thy_Info.pure_theory th)));
-
-PolyML.addPrettyPrinter (fn _ => fn _ => fn ct =>
-  ML_Pretty.to_polyml (Pretty.to_ML (Proof_Display.pp_cterm Thy_Info.pure_theory ct)));
-
-PolyML.addPrettyPrinter (fn _ => fn _ => fn cT =>
-  ML_Pretty.to_polyml (Pretty.to_ML (Proof_Display.pp_ctyp Thy_Info.pure_theory cT)));
-
-PolyML.addPrettyPrinter (fn _ => fn _ => fn T =>
-  ML_Pretty.to_polyml (Pretty.to_ML (Proof_Display.pp_typ Thy_Info.pure_theory T)));
-
-PolyML.addPrettyPrinter (fn _ => fn _ => fn thy =>
-  ML_Pretty.to_polyml (Pretty.to_ML (Context.pretty_thy thy)));
-
-PolyML.addPrettyPrinter (fn _ => fn _ => fn ctxt =>
-  ML_Pretty.to_polyml (Pretty.to_ML (Proof_Display.pp_context ctxt)));
-
-PolyML.addPrettyPrinter (fn _ => fn _ => fn ast =>
-  ML_Pretty.to_polyml (Pretty.to_ML (Ast.pretty_ast ast)));
-
-PolyML.addPrettyPrinter (fn _ => fn _ => fn path =>
-  ML_Pretty.to_polyml (Pretty.to_ML (Path.pretty path)));
-
-PolyML.addPrettyPrinter (fn _ => fn _ => fn digest =>
-  ML_Pretty.to_polyml (Pretty.to_ML (Pretty.str (quote (SHA1.rep digest)))));
-
-PolyML.addPrettyPrinter (fn _ => fn _ => fn _: Proof.state =>
-  ML_Pretty.to_polyml (Pretty.to_ML (Pretty.str "<Proof.state>")));
-
-PolyML.addPrettyPrinter (fn _ => fn _ => fn st =>
-  ML_Pretty.to_polyml (Pretty.to_ML (Toplevel.pretty_abstract st)));
-
-PolyML.addPrettyPrinter (fn _ => fn _ => fn morphism =>
-  ML_Pretty.to_polyml (Pretty.to_ML (Morphism.pretty morphism)));
-
-PolyML.addPrettyPrinter (fn depth => fn _ => fn str =>
-  ML_Pretty.to_polyml (Pretty.to_ML (ML_Syntax.pretty_string (FixedInt.toInt (depth * 100)) str)));
-
-PolyML.addPrettyPrinter (fn depth => fn _ => fn tree =>
-  ML_Pretty.to_polyml (Pretty.to_ML (XML.pretty (FixedInt.toInt depth) tree)));
-
-PolyML.addPrettyPrinter (fn depth => fn pretty => fn var =>
-  pretty (Synchronized.value var, depth));
-
-PolyML.addPrettyPrinter (fn depth => fn pretty => fn x =>
-  (case Future.peek x of
-    NONE => PolyML.PrettyString "<future>"
-  | SOME (Exn.Exn _) => PolyML.PrettyString "<failed>"
-  | SOME (Exn.Res y) => pretty (y, depth)));
-
-PolyML.addPrettyPrinter (fn depth => fn pretty => fn x =>
-  (case Lazy.peek x of
-    NONE => PolyML.PrettyString "<lazy>"
-  | SOME (Exn.Exn _) => PolyML.PrettyString "<failed>"
-  | SOME (Exn.Res y) => pretty (y, depth)));
-
-
-local
-
-open PolyML;
-val from_ML = Pretty.from_ML o ML_Pretty.from_polyml;
-fun prt_app name prt = Pretty.block [Pretty.str (name ^ " "), prt];
-fun prt_apps name = Pretty.enum "," (name ^ " (") ")";
-
-fun prt_term parens (dp: FixedInt.int) t =
-  if dp <= 0 then Pretty.str "..."
-  else
-    (case t of
-      _ $ _ =>
-        op :: (strip_comb t)
-        |> map_index (fn (i, u) => prt_term true (dp - FixedInt.fromInt i - 1) u)
-        |> Pretty.separate " $"
-        |> (if parens then Pretty.enclose "(" ")" else Pretty.block)
-    | Abs (a, T, b) =>
-        prt_apps "Abs"
-         [from_ML (prettyRepresentation (a, dp - 1)),
-          from_ML (prettyRepresentation (T, dp - 2)),
-          prt_term false (dp - 3) b]
-    | Const a => prt_app "Const" (from_ML (prettyRepresentation (a, dp - 1)))
-    | Free a => prt_app "Free" (from_ML (prettyRepresentation (a, dp - 1)))
-    | Var a => prt_app "Var" (from_ML (prettyRepresentation (a, dp - 1)))
-    | Bound a => prt_app "Bound" (from_ML (prettyRepresentation (a, dp - 1))));
-
-in
-
-val _ =
-  PolyML.addPrettyPrinter (fn depth => fn _ => fn t =>
-    ML_Pretty.to_polyml (Pretty.to_ML (prt_term false depth t)));
-
-local
-
-fun prt_proof parens dp prf =
-  if dp <= 0 then Pretty.str "..."
-  else
-    (case prf of
-      _ % _ => prt_proofs parens dp prf
-    | _ %% _ => prt_proofs parens dp prf
-    | Abst (a, T, b) =>
-        prt_apps "Abst"
-         [from_ML (prettyRepresentation (a, dp - 1)),
-          from_ML (prettyRepresentation (T, dp - 2)),
-          prt_proof false (dp - 3) b]
-    | AbsP (a, t, b) =>
-        prt_apps "AbsP"
-         [from_ML (prettyRepresentation (a, dp - 1)),
-          from_ML (prettyRepresentation (t, dp - 2)),
-          prt_proof false (dp - 3) b]
-    | Hyp t => prt_app "Hyp" (prt_term true (dp - 1) t)
-    | MinProof => Pretty.str "MinProof"
-    | PBound a => prt_app "PBound" (from_ML (prettyRepresentation (a, dp - 1)))
-    | PAxm a => prt_app "PAxm" (from_ML (prettyRepresentation (a, dp - 1)))
-    | OfClass a => prt_app "OfClass" (from_ML (prettyRepresentation (a, dp - 1)))
-    | Oracle a => prt_app "Oracle" (from_ML (prettyRepresentation (a, dp - 1)))
-    | Promise a => prt_app "Promise" (from_ML (prettyRepresentation (a, dp - 1)))
-    | PThm a => prt_app "PThm" (from_ML (prettyRepresentation (a, dp - 1))))
-
-and prt_proofs parens dp prf =
-  let
-    val (head, args) = strip_proof prf [];
-    val prts =
-      head (dp - 1) :: flat (map_index (fn (i, prt) => prt (dp - FixedInt.fromInt i - 2)) args);
-  in if parens then Pretty.enclose "(" ")" prts else Pretty.block prts end
-
-and strip_proof (p % t) res =
-      strip_proof p
-        ((fn d => [Pretty.str " %", Pretty.brk 1, from_ML (prettyRepresentation (t, d))]) :: res)
-  | strip_proof (p %% q) res =
-      strip_proof p ((fn d => [Pretty.str " %%", Pretty.brk 1, prt_proof true d q]) :: res)
-  | strip_proof p res = (fn d => prt_proof true d p, res);
-
-in
-
-val _ =
-  PolyML.addPrettyPrinter (fn depth => fn _ => fn prf =>
-    ML_Pretty.to_polyml (Pretty.to_ML (prt_proof false depth prf)));
-
-end;
-
-end;
--- a/src/Pure/ML/ml_antiquotation.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/ML/ml_antiquotation.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -38,8 +38,10 @@
     (fn src => fn () =>
       ML_Context.value_decl "position" (ML_Syntax.print_position (#2 (Token.name_of_src src)))) #>
 
+  inline (Binding.make ("make_string", @{here}))
+    (Args.context >> (ML_Pretty.make_string_fn o ML_Context.struct_name)) #>
+
   value (Binding.make ("binding", @{here}))
     (Scan.lift (Parse.position Args.name) >> ML_Syntax.make_binding));
 
 end;
-
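
  A minimal sketch of how the make_string antiquotation registered above is
  used inline in Isabelle/ML source (not part of the changeset itself); the
  tuple value is purely illustrative.

    (* render an arbitrary ML value as a string, subject to the ML print depth *)
    val msg = "result: " ^ @{make_string} (1, "a", [true, false]);
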
--- a/src/Pure/ML/ml_antiquotations.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/ML/ml_antiquotations.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -21,9 +21,6 @@
   ML_Antiquotation.inline @{binding assert}
     (Scan.succeed "(fn b => if b then () else raise General.Fail \"Assertion failed\")") #>
 
-  ML_Antiquotation.inline @{binding make_string}
-    (Args.context >> (ml_make_string o ML_Context.struct_name)) #>
-
   ML_Antiquotation.declaration @{binding print}
     (Scan.lift (Scan.optional Args.name "Output.writeln"))
       (fn src => fn output => fn ctxt =>
@@ -36,7 +33,8 @@
             \  (" ^ output ^ ") o (fn s => s ^ Position.here (" ^
             ML_Syntax.print_position pos ^ "));\n";
           val body =
-            "(fn x => (" ^ struct_name ^ "." ^ a ^ " (" ^ ml_make_string struct_name ^ " x); x))";
+            "(fn x => (" ^ struct_name ^ "." ^ a ^
+              " (" ^ ML_Pretty.make_string_fn struct_name ^ " x); x))";
         in (K (env, body), ctxt') end));
 
 
--- a/src/Pure/ML/ml_compiler.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/ML/ml_compiler.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -65,7 +65,7 @@
           let
             val xml =
               ML_Name_Space.displayTypeExpression (types, depth, space)
-              |> ML_Pretty.from_polyml |> Pretty.from_ML |> Pretty.string_of
+              |> Pretty.from_polyml |> Pretty.string_of
               |> Output.output |> YXML.parse_body;
           in cons (pos, fn () => Markup.ML_typing, fn () => YXML.string_of_body xml) end
       end;
@@ -144,7 +144,6 @@
 
 fun eval (flags: flags) pos toks =
   let
-    val _ = Secure.deny_ml ();
     val space = ML_Env.make_name_space {SML = #SML flags, exchange = #exchange flags};
     val opt_context = Context.thread_data ();
 
@@ -193,7 +192,7 @@
         val pos = Exn_Properties.position_of_location loc;
         val txt =
           (if hard then "ML error" else "ML warning") ^ Position.here pos ^ ":\n" ^
-          Pretty.string_of (Pretty.from_ML (ML_Pretty.from_polyml msg));
+          Pretty.string_of (Pretty.from_polyml msg);
       in if hard then err txt else warn txt end;
 
 
@@ -205,8 +204,7 @@
       let
         fun display disp x =
           if depth > 0 then
-            (write (disp x |> ML_Pretty.from_polyml |> Pretty.from_ML |> Pretty.string_of);
-              write "\n")
+            (write (disp x |> Pretty.from_polyml |> Pretty.string_of); write "\n")
           else ();
 
         fun apply_fix (a, b) =
--- a/src/Pure/ML/ml_compiler0.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/ML/ml_compiler0.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -33,19 +33,20 @@
 
 fun ml_input start_line name txt =
   let
-    fun positions line (#"@" :: #"{" :: #"h" :: #"e" :: #"r" :: #"e" :: #"}" :: cs) res =
+    fun input line (#"@" :: #"{" :: #"h" :: #"e" :: #"r" :: #"e" :: #"}" :: cs) res =
           let val s = "(Position.line_file_only " ^ Int.toString line ^ " \"" ^ name ^ "\")"
-          in positions line cs (s :: res) end
-      | positions line (#"\\" :: #"<" :: cs) res = positions line cs ("\\\\<" :: res)
-      | positions line (#"\n" :: cs) res = positions (line + 1) cs ("\n" :: res)
-      | positions line (c :: cs) res = positions line cs (str c :: res)
-      | positions _ [] res = rev res;
-  in String.concat (positions start_line (String.explode txt) []) end;
+          in input line cs (s :: res) end
+      | input line (#"@" :: #"{" :: #"m" :: #"a" :: #"k" :: #"e" :: #"_" ::
+            #"s" :: #"t" :: #"r" :: #"i" :: #"n" :: #"g" :: #"}" :: cs) res =
+          input line cs (ML_Pretty.make_string_fn "" :: res)
+      | input line (#"\\" :: #"<" :: cs) res = input line cs ("\\\\<" :: res)
+      | input line (#"\n" :: cs) res = input (line + 1) cs ("\n" :: res)
+      | input line (c :: cs) res = input line cs (str c :: res)
+      | input _ [] res = rev res;
+  in String.concat (input start_line (String.explode txt) []) end;
 
 fun use_text ({name_space, here, print, error, ...}: context) {line, file, verbose, debug} text =
   let
-    val _ = Secure.deny_ml ();
-
     val current_line = Unsynchronized.ref line;
     val in_buffer = Unsynchronized.ref (String.explode (ml_input line file text));
     val out_buffer = Unsynchronized.ref ([]: string list);
--- a/src/Pure/ML/ml_env.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/ML/ml_env.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -77,7 +77,7 @@
 fun forget_structure name =
   Env.map (fn {bootstrap, tables, sml_tables, breakpoints} =>
     let
-      val _ = if bootstrap then ML_Name_Space.forget_global_structure name else ();
+      val _ = if bootstrap then ML_Name_Space.forget_structure name else ();
       val tables' =
         (#1 tables, #2 tables, #3 tables, Symtab.delete_safe name (#4 tables), #5 tables, #6 tables);
     in make_data (bootstrap, tables', sml_tables, breakpoints) end);
--- a/src/Pure/ML/ml_name_space.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/ML/ml_name_space.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -6,20 +6,23 @@
 
 structure ML_Name_Space =
 struct
+  val critical_structures = ["CInterface", "Foreign", "RunCall", "RuntimeCalls", "Signal"];
+
   open PolyML.NameSpace;
 
   type T = PolyML.NameSpace.nameSpace;
   val global = PolyML.globalNameSpace;
-  val forget_global_structure = PolyML.Compiler.forgetStructure;
 
   type valueVal = Values.value;
   fun displayVal (x, depth, space) = Values.printWithType (x, depth, SOME space);
   fun displayTypeExpression (x, depth, space) = Values.printType (x, depth, SOME space);
   val initial_val = List.filter (fn (a, _) => a <> "use" andalso a <> "exit") (#allVal global ());
+  val forget_val = PolyML.Compiler.forgetValue;
 
   type typeVal = TypeConstrs.typeConstr;
   fun displayType (x, depth, space) = TypeConstrs.print (x, depth, SOME space);
   val initial_type = #allType global ();
+  val forget_type = PolyML.Compiler.forgetType;
 
   type fixityVal = Infixes.fixity;
   fun displayFix (_: string, x) = Infixes.print x;
@@ -27,7 +30,10 @@
 
   type structureVal = Structures.structureVal;
   fun displayStruct (x, depth, space) = Structures.print (x, depth, SOME space);
-  val initial_structure = #allStruct global ();
+  val initial_structure =
+    List.filter (fn (a, _) => not (List.exists (fn b => a = b) critical_structures))
+      (#allStruct global ());
+  val forget_structure = PolyML.Compiler.forgetStructure;
 
   type signatureVal = Signatures.signatureVal;
   fun displaySig (x, depth, space) = Signatures.print (x, depth, SOME space);
--- a/src/Pure/ML/ml_options.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/ML/ml_options.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -66,12 +66,12 @@
 (* print depth *)
 
 val print_depth_raw =
-  Config.declare ("ML_print_depth", @{here}) (fn _ => Config.Int (get_default_print_depth ()));
+  Config.declare ("ML_print_depth", @{here}) (fn _ => Config.Int (ML_Pretty.get_print_depth ()));
 val print_depth = Config.int print_depth_raw;
 
 fun get_print_depth () =
   (case Context.thread_data () of
-    NONE => get_default_print_depth ()
+    NONE => ML_Pretty.get_print_depth ()
   | SOME context => Config.get_generic context print_depth);
 
 fun get_print_depth_default default =
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/src/Pure/ML/ml_pp.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -0,0 +1,113 @@
+(*  Title:      Pure/ML/ml_pp.ML
+    Author:     Makarius
+
+ML toplevel pretty-printing for logical entities.
+*)
+
+structure ML_PP: sig end =
+struct
+
+val _ =
+  PolyML.addPrettyPrinter
+    (fn _ => fn _ => Pretty.to_polyml o Proof_Display.pp_context);
+
+val _ =
+  PolyML.addPrettyPrinter
+    (fn _ => fn _ => Pretty.to_polyml o Proof_Display.pp_thm Thy_Info.pure_theory);
+
+val _ =
+  PolyML.addPrettyPrinter
+    (fn _ => fn _ => Pretty.to_polyml o Proof_Display.pp_cterm Thy_Info.pure_theory);
+
+val _ =
+  PolyML.addPrettyPrinter
+    (fn _ => fn _ => Pretty.to_polyml o Proof_Display.pp_ctyp Thy_Info.pure_theory);
+
+val _ =
+  PolyML.addPrettyPrinter
+    (fn _ => fn _ => Pretty.to_polyml o Proof_Display.pp_typ Thy_Info.pure_theory);
+
+
+local
+
+fun prt_app name prt = Pretty.block [Pretty.str (name ^ " "), prt];
+fun prt_apps name = Pretty.enum "," (name ^ " (") ")";
+
+fun prt_term parens (dp: FixedInt.int) t =
+  if dp <= 0 then Pretty.str "..."
+  else
+    (case t of
+      _ $ _ =>
+        op :: (strip_comb t)
+        |> map_index (fn (i, u) => prt_term true (dp - FixedInt.fromInt i - 1) u)
+        |> Pretty.separate " $"
+        |> (if parens then Pretty.enclose "(" ")" else Pretty.block)
+    | Abs (a, T, b) =>
+        prt_apps "Abs"
+         [Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1)),
+          Pretty.from_polyml (PolyML.prettyRepresentation (T, dp - 2)),
+          prt_term false (dp - 3) b]
+    | Const a => prt_app "Const" (Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1)))
+    | Free a => prt_app "Free" (Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1)))
+    | Var a => prt_app "Var" (Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1)))
+    | Bound a => prt_app "Bound" (Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1))));
+
+in
+
+val _ = PolyML.addPrettyPrinter (fn depth => fn _ => Pretty.to_polyml o prt_term false depth);
+
+
+local
+
+fun prt_proof parens dp prf =
+  if dp <= 0 then Pretty.str "..."
+  else
+    (case prf of
+      _ % _ => prt_proofs parens dp prf
+    | _ %% _ => prt_proofs parens dp prf
+    | Abst (a, T, b) =>
+        prt_apps "Abst"
+         [Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1)),
+          Pretty.from_polyml (PolyML.prettyRepresentation (T, dp - 2)),
+          prt_proof false (dp - 3) b]
+    | AbsP (a, t, b) =>
+        prt_apps "AbsP"
+         [Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1)),
+          Pretty.from_polyml (PolyML.prettyRepresentation (t, dp - 2)),
+          prt_proof false (dp - 3) b]
+    | Hyp t => prt_app "Hyp" (prt_term true (dp - 1) t)
+    | MinProof => Pretty.str "MinProof"
+    | PBound a => prt_app "PBound" (Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1)))
+    | PAxm a => prt_app "PAxm" (Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1)))
+    | OfClass a => prt_app "OfClass" (Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1)))
+    | Oracle a => prt_app "Oracle" (Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1)))
+    | Promise a => prt_app "Promise" (Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1)))
+    | PThm a => prt_app "PThm" (Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1))))
+
+and prt_proofs parens dp prf =
+  let
+    val (head, args) = strip_proof prf [];
+    val prts =
+      head (dp - 1) :: flat (map_index (fn (i, prt) => prt (dp - FixedInt.fromInt i - 2)) args);
+  in if parens then Pretty.enclose "(" ")" prts else Pretty.block prts end
+
+and strip_proof (p % t) res =
+      strip_proof p
+        ((fn d =>
+          [Pretty.str " %", Pretty.brk 1,
+            Pretty.from_polyml (PolyML.prettyRepresentation (t, d))]) :: res)
+  | strip_proof (p %% q) res =
+      strip_proof p ((fn d => [Pretty.str " %%", Pretty.brk 1, prt_proof true d q]) :: res)
+  | strip_proof p res = (fn d => prt_proof true d p, res);
+
+in
+
+val _ =
+  PolyML.addPrettyPrinter (fn depth => fn _ => fn prf =>
+    ML_Pretty.to_polyml (Pretty.to_ML ~1 (prt_proof false depth prf)));
+
+end;
+
+end;
+
+end;
--- a/src/Pure/ML/ml_pretty.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/ML/ml_pretty.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -17,11 +17,18 @@
   val enum: string -> string -> string -> ('a * int -> pretty) -> 'a list * int -> pretty
   val to_polyml: pretty -> PolyML.pretty
   val from_polyml: PolyML.pretty -> pretty
+  val get_print_depth: unit -> int
+  val print_depth: int -> unit
+  val format_polyml: int -> PolyML.pretty -> string
+  val format: int -> pretty -> string
+  val make_string_fn: string -> string
 end;
 
 structure ML_Pretty: ML_PRETTY =
 struct
 
+(* datatype pretty *)
+
 datatype pretty =
   Block of (string * string) * bool * FixedInt.int * pretty list |
   String of string * FixedInt.int |
@@ -81,4 +88,43 @@
           String (s, FixedInt.fromInt (case Int.fromString len of SOME i => i | NONE => size s))
   in convert "" end;
 
+
+(* default print depth *)
+
+local
+  val depth = Unsynchronized.ref 0;
+in
+  fun get_print_depth () = ! depth;
+  fun print_depth n = (depth := n; PolyML.print_depth n);
+  val _ = print_depth 10;
 end;
+
+
+(* format *)
+
+fun format_polyml margin prt =
+  let
+    val result = Unsynchronized.ref [];
+    val () = PolyML.prettyPrint (fn s => result := s :: ! result, margin) prt
+  in String.concat (List.rev (! result)) end;
+
+fun format margin = format_polyml margin o to_polyml;
+
+
+(* make string *)
+
+local
+  fun pretty_string_of s = "(fn x => Pretty.string_of (Pretty.from_polyml (" ^ s ^ ")))";
+  fun pretty_value depth = "PolyML.prettyRepresentation (x, FixedInt.fromInt (" ^ depth ^ "))";
+in
+
+fun make_string_fn local_env =
+  if local_env <> "" then
+    pretty_string_of (pretty_value (local_env ^ ".ML_print_depth ()"))
+  else if List.exists (fn (a, _) => a = "Pretty") (#allStruct ML_Name_Space.global ()) then
+    pretty_string_of (pretty_value "ML_Pretty.get_print_depth ()")
+  else "(fn x => ML_Pretty.format_polyml 78 (" ^ pretty_value "ML_Pretty.get_print_depth ()" ^ "))";
+
+end;
+
+end;
--- a/src/Pure/ML/ml_syntax.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/ML/ml_syntax.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -123,4 +123,8 @@
       else take (Int.max (max_len, 0)) body @ ["..."];
   in Pretty.str (quote (implode (map print_char body'))) end;
 
+val _ =
+  PolyML.addPrettyPrinter (fn depth => fn _ => fn str =>
+    Pretty.to_polyml (pretty_string (FixedInt.toInt (depth * 100)) str));
+
 end;
--- a/src/Pure/PIDE/xml.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/PIDE/xml.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -170,6 +170,9 @@
 fun pretty depth tree =
   Pretty.str (Buffer.content (buffer_of (Int.max (0, depth)) tree));
 
+val _ =
+  PolyML.addPrettyPrinter (fn depth => fn _ => Pretty.to_polyml o pretty (FixedInt.toInt depth));
+
 
 
 (** XML parsing **)
--- a/src/Pure/ROOT	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/ROOT	Fri Mar 18 20:29:50 2016 +0100
@@ -47,11 +47,8 @@
     "General/random.ML"
     "General/same.ML"
     "General/scan.ML"
-    "General/secure.ML"
     "General/seq.ML"
     "General/sha1.ML"
-    "General/sha1_polyml.ML"
-    "General/sha1_samples.ML"
     "General/socket_io.ML"
     "General/source.ML"
     "General/stack.ML"
@@ -101,7 +98,6 @@
     "ML/exn_debugger.ML"
     "ML/exn_properties.ML"
     "ML/fixed_int_dummy.ML"
-    "ML/install_pp_polyml.ML"
     "ML/ml_antiquotation.ML"
     "ML/ml_compiler.ML"
     "ML/ml_compiler0.ML"
@@ -113,6 +109,7 @@
     "ML/ml_lex.ML"
     "ML/ml_name_space.ML"
     "ML/ml_options.ML"
+    "ML/ml_pp.ML"
     "ML/ml_pretty.ML"
     "ML/ml_profiling.ML"
     "ML/ml_statistics.ML"
--- a/src/Pure/ROOT.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/ROOT.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -28,28 +28,21 @@
 use "Concurrent/multithreading.ML";
 
 use "Concurrent/unsynchronized.ML";
-val _ = PolyML.Compiler.forgetValue "ref";
-val _ = PolyML.Compiler.forgetType "ref";
+val _ = ML_Name_Space.forget_val "ref";
+val _ = ML_Name_Space.forget_type "ref";
 
 
 (* pervasive environment *)
 
-val _ = PolyML.Compiler.forgetValue "isSome";
-val _ = PolyML.Compiler.forgetValue "getOpt";
-val _ = PolyML.Compiler.forgetValue "valOf";
-val _ = PolyML.Compiler.forgetValue "foldl";
-val _ = PolyML.Compiler.forgetValue "foldr";
-val _ = PolyML.Compiler.forgetValue "print";
-val _ = PolyML.Compiler.forgetValue "explode";
-val _ = PolyML.Compiler.forgetValue "concat";
+val _ =
+  List.app ML_Name_Space.forget_val
+    ["isSome", "getOpt", "valOf", "foldl", "foldr", "print", "explode", "concat"];
 
 val ord = SML90.ord;
 val chr = SML90.chr;
 val raw_explode = SML90.explode;
 val implode = SML90.implode;
 
-fun quit () = exit 0;
-
 
 (* ML runtime system *)
 
@@ -63,20 +56,11 @@
 
 use "ML/ml_pretty.ML";
 
-local
-  val depth = Unsynchronized.ref 10;
-in
-  fun get_default_print_depth () = ! depth;
-  fun default_print_depth n = (depth := n; PolyML.print_depth n);
-  val _ = default_print_depth 10;
-end;
-
 val error_depth = PolyML.error_depth;
 
 
 (* ML compiler *)
 
-use "General/secure.ML";
 use "ML/ml_compiler0.ML";
 
 PolyML.Compiler.reportUnreferencedIds := true;
@@ -84,10 +68,6 @@
 PolyML.Compiler.printInAlphabeticalOrder := false;
 PolyML.Compiler.maxInlineSize := 80;
 
-fun ml_make_string struct_name =
-  "(fn x => Pretty.string_of (Pretty.from_ML (ML_Pretty.from_polyml (PolyML.prettyRepresentation (x, FixedInt.fromInt (" ^
-    struct_name ^ ".ML_print_depth ()))))))";
-
 
 (* ML debugger *)
 
@@ -162,10 +142,9 @@
 use "General/socket_io.ML";
 use "General/seq.ML";
 use "General/timing.ML";
+use "General/sha1.ML";
 
-use "General/sha1.ML";
-use "General/sha1_polyml.ML";
-use "General/sha1_samples.ML";
+val _ = List.app ML_Name_Space.forget_structure ML_Name_Space.critical_structures;
 
 use "PIDE/yxml.ML";
 use "PIDE/document_id.ML";
@@ -410,8 +389,7 @@
 
 structure Output: OUTPUT = Output;  (*seal system channels!*)
 
-
-use "ML/install_pp_polyml.ML";
+use "ML/ml_pp.ML";
 
 
 (* the Pure theory *)
--- a/src/Pure/Syntax/ast.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/Syntax/ast.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -66,6 +66,8 @@
 fun pretty_rule (lhs, rhs) =
   Pretty.block [pretty_ast lhs, Pretty.str "  ->", Pretty.brk 2, pretty_ast rhs];
 
+val _ = PolyML.addPrettyPrinter (fn _ => fn _ => Pretty.to_polyml o pretty_ast);
+
 
 (* strip_positions *)
 
--- a/src/Pure/Syntax/lexicon.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/Syntax/lexicon.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -66,7 +66,6 @@
   val is_marked: string -> bool
   val dummy_type: term
   val fun_type: term
-  val pp_lexicon: Scan.lexicon -> Pretty.T
 end;
 
 structure Lexicon: LEXICON =
@@ -455,7 +454,8 @@
 
 (* toplevel pretty printing *)
 
-val pp_lexicon =
-  Pretty.str_list "{" "}" o map quote o Scan.dest_lexicon;
+val _ =
+  PolyML.addPrettyPrinter (fn _ => fn _ =>
+    Pretty.to_polyml o Pretty.str_list "{" "}" o map quote o Scan.dest_lexicon);
 
 end;
--- a/src/Pure/Syntax/parser.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/Syntax/parser.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -71,7 +71,7 @@
   N.B. that the chains parameter has the form [(from, [to])];
   prod_count is of type "int option" and is only updated if it is <> NONE*)
 fun add_prods _ chains lambdas prod_count [] = (chains, lambdas, prod_count)
-  | add_prods prods chains lambdas prod_count ((lhs, new_prod as (rhs, name, pri)) :: ps) =
+  | add_prods prods chains lambdas prod_count ((lhs, new_prod as (rhs, _, pri)) :: ps) =
       let
         val chain_from =
           (case (pri, rhs) of
@@ -95,7 +95,7 @@
           if is_none new_chain then ([], lambdas)
           else
             let (*lookahead of chain's source*)
-              val ((from_nts, from_tks), _) = Array.sub (prods, the new_chain);
+              val ((_, from_tks), _) = Array.sub (prods, the new_chain);
 
               (*copy from's lookahead to chain's destinations*)
               fun copy_lookahead [] added = added
@@ -195,7 +195,7 @@
                           (added_lambdas, added_starts)
                       | process_nts (nt :: nts) added_lambdas added_starts =
                           let
-                            val (lookahead as (old_nts, old_tks), nt_prods) = Array.sub (prods, nt);
+                            val ((old_nts, old_tks), nt_prods) = Array.sub (prods, nt);
 
                             (*existing productions whose lookahead may depend on l*)
                             val tk_prods =
@@ -235,7 +235,7 @@
           in propagate_lambda (subtract (op =) lambdas lambdas') added_starts lambdas' end;
 
         (*insert production into grammar*)
-        val (added_starts', prod_count') =
+        val (added_starts', _) =
           if is_some chain_from
           then (added_starts', prod_count)  (*don't store chain production*)
           else
@@ -255,7 +255,7 @@
 
               (*add lhs NT to list of dependent NTs in lookahead*)
               fun add_nts [] = ()
-                | add_nts (nt :: nts) =
+                | add_nts (nt :: _) =
                     let val ((old_nts, old_tks), ps) = Array.sub (prods, nt) in
                       if member (op =) old_nts lhs then ()
                       else Array.update (prods, nt, ((lhs :: old_nts, old_tks), ps))
@@ -567,7 +567,7 @@
   filter (fn (_, _, prec: int) => prec >= min_prec andalso prec < max_prec);
 
 (*Make states using a list of rhss*)
-fun mk_states i min_prec lhs_ID rhss =
+fun mk_states i lhs_ID rhss =
   let fun mk_state (rhs, id, prod_prec) = (lhs_ID, prod_prec, [], rhs, id, i);
   in map mk_state rhss end;
 
@@ -604,11 +604,11 @@
             A = B andalso prec > min_prec andalso prec <= max_prec
           | _ => false) Si;
 
-fun get_states Estate i ii A max_prec =
+fun get_states Estate j A max_prec =
   filter
     (fn (_, _, _, Nonterminal (B, prec) :: _, _, _) => A = B andalso prec <= max_prec
       | _ => false)
-    (Array.sub (Estate, ii));
+    (Array.sub (Estate, j));
 
 
 fun movedot_term c (A, j, ts, Terminal a :: sa, id, i) =
@@ -654,7 +654,7 @@
   let
     fun all_prods_for nt = prods_for prods chains true c [nt];
 
-    fun processS used [] (Si, Sii) = (Si, Sii)
+    fun processS _ [] (Si, Sii) = (Si, Sii)
       | processS used (S :: States) (Si, Sii) =
           (case S of
             (_, _, _, Nonterminal (nt, min_prec) :: _, _, _) =>
@@ -669,12 +669,12 @@
                         let
                           val tk_prods = all_prods_for nt;
                           val States' =
-                            mk_states i min_prec nt (get_RHS' min_prec used_prec tk_prods);
+                            mk_states i nt (get_RHS' min_prec used_prec tk_prods);
                         in (update_prec (nt, min_prec) used, movedot_lambda l S @ States') end
                   | NONE => (*nonterminal is parsed for the first time*)
                       let
                         val tk_prods = all_prods_for nt;
-                        val States' = mk_states i min_prec nt (get_RHS min_prec tk_prods);
+                        val States' = mk_states i nt (get_RHS min_prec tk_prods);
                       in (Inttab.update (nt, (min_prec, [])) used, States') end);
               in
                 processS used' (new_states @ States) (S :: Si, Sii)
@@ -692,7 +692,7 @@
                     val States' = map (movedot_nonterm tt) Slist;
                   in processS used' (States' @ States) (S :: Si, Sii) end
                 else
-                  let val Slist = get_states Estate i j A prec
+                  let val Slist = get_states Estate j A prec
                   in processS used (map (movedot_nonterm tt) Slist @ States) (S :: Si, Sii) end
               end)
   in processS Inttab.empty states ([], []) end;
--- a/src/Pure/System/bash_windows.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/System/bash_windows.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -1,4 +1,4 @@
-(*  Title:      Pure/Concurrent/bash_windows.ML
+(*  Title:      Pure/System/bash_windows.ML
     Author:     Makarius
 
 GNU bash processes, with propagation of interrupts -- Windows version.
--- a/src/Pure/System/isabelle_process.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/System/isabelle_process.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -190,7 +190,7 @@
 
 val init_protocol = uninterruptible (fn _ => fn socket =>
   let
-    val _ = SHA1_Samples.test ()
+    val _ = SHA1.test_samples ()
       handle exn as Fail msg => (Output.physical_stderr (msg ^ "\n"); Exn.reraise exn);
     val _ = Output.physical_stderr Symbol.STX;
 
--- a/src/Pure/System/isabelle_process.scala	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/System/isabelle_process.scala	Fri Mar 18 20:29:50 2016 +0100
@@ -14,15 +14,14 @@
     logic: String = "",
     args: List[String] = Nil,
     modes: List[String] = Nil,
-    secure: Boolean = false,
     receiver: Prover.Receiver = Console.println(_),
     store: Sessions.Store = Sessions.store()): Isabelle_Process =
   {
     val channel = System_Channel()
     val process =
       try {
-        ML_Process(options, logic = logic, args = args, modes = modes, secure = secure,
-          channel = Some(channel), store = store)
+        ML_Process(options, logic = logic, args = args, modes = modes, store = store,
+          channel = Some(channel))
       }
       catch { case exn @ ERROR(_) => channel.accepted(); throw exn }
     process.stdin.close
--- a/src/Pure/Tools/build.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/Tools/build.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -122,7 +122,7 @@
 
 fun build args_file =
   let
-    val _ = SHA1_Samples.test ();
+    val _ = SHA1.test_samples ();
 
     val (symbol_codes, (command_timings, (do_output, (verbose, (browser_info,
           (document_files, (graph_file, (parent_name, (chapter, (name, theories)))))))))) =
--- a/src/Pure/Tools/debugger.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/Tools/debugger.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -189,9 +189,8 @@
     val space = ML_Debugger.debug_name_space (the_debug_state thread_name index);
 
     fun print x =
-      Pretty.from_ML
-        (ML_Pretty.from_polyml
-          (ML_Name_Space.displayVal (x, FixedInt.fromInt (ML_Options.get_print_depth ()), space)));
+      Pretty.from_polyml
+        (ML_Name_Space.displayVal (x, FixedInt.fromInt (ML_Options.get_print_depth ()), space));
     fun print_all () =
       #allVal (ML_Debugger.debug_local_name_space (the_debug_state thread_name index)) ()
       |> sort_by #1 |> map (Pretty.item o single o print o #2)
--- a/src/Pure/Tools/ml_process.scala	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/Tools/ml_process.scala	Fri Mar 18 20:29:50 2016 +0100
@@ -18,7 +18,6 @@
     dirs: List[Path] = Nil,
     modes: List[String] = Nil,
     raw_ml_system: Boolean = false,
-    secure: Boolean = false,
     cwd: JFile = null,
     env: Map[String, String] = Isabelle_System.settings(),
     redirect: Boolean = false,
@@ -67,17 +66,15 @@
     val env_options = Map("ISABELLE_PROCESS_OPTIONS" -> File.standard_path(isabelle_process_options))
     val eval_options = if (heaps.isEmpty) Nil else List("Options.load_default ()")
 
-    val eval_secure = if (secure) List("Secure.set_secure ()") else Nil
-
     val eval_process =
       if (heaps.isEmpty)
         List("PolyML.print_depth 10")
       else
         channel match {
           case None =>
-            List("(default_print_depth 10; Isabelle_Process.init_options ())")
+            List("(ML_Pretty.print_depth 10; Isabelle_Process.init_options ())")
           case Some(ch) =>
-            List("(default_print_depth 10; Isabelle_Process.init_protocol " +
+            List("(ML_Pretty.print_depth 10; Isabelle_Process.init_protocol " +
               ML_Syntax.print_string0(ch.server_name) + ")")
         }
 
@@ -88,7 +85,7 @@
     // bash
     val bash_args =
       Word.explode(Isabelle_System.getenv("ML_OPTIONS")) :::
-      (eval_init ::: eval_modes ::: eval_options ::: eval_secure ::: eval_process).
+      (eval_init ::: eval_modes ::: eval_options ::: eval_process).
         map(eval => List("--eval", eval)).flatten ::: args
 
     Bash.process("""exec "$ML_HOME/poly" -q """ + File.bash_args(bash_args),
--- a/src/Pure/context.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/context.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -169,6 +169,8 @@
 
 val pretty_thy = Pretty.str_list "{" "}" o display_names;
 
+val _ = PolyML.addPrettyPrinter (fn _ => fn _ => Pretty.to_polyml o pretty_thy);
+
 fun pretty_abbrev_thy thy =
   let
     val names = display_names thy;
--- a/src/Pure/morphism.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/morphism.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -71,6 +71,8 @@
 
 fun pretty (Morphism {names, ...}) = Pretty.enum ";" "{" "}" (map Pretty.str (rev names));
 
+val _ = PolyML.addPrettyPrinter (fn _ => fn _ => Pretty.to_polyml o pretty);
+
 fun binding (Morphism {binding, ...}) = apply binding;
 fun typ (Morphism {typ, ...}) = apply typ;
 fun term (Morphism {term, ...}) = apply term;