author:    wenzelm
date:      Fri Mar 18 20:29:50 2016 +0100 (2016-03-18)
changeset: 62670:006c057814f1
parent:    62653:d3a5b127eb81
parent:    62669:c95b76681e65
child:     62671:a9ee1f240b81

merged

files:
NEWS
src/Pure/General/secure.ML
src/Pure/General/sha1_polyml.ML
src/Pure/General/sha1_samples.ML
src/Pure/ML/install_pp_polyml.ML

--- a/NEWS	Fri Mar 18 12:54:20 2016 +0100
+++ b/NEWS	Fri Mar 18 20:29:50 2016 +0100
@@ -220,6 +220,9 @@
 
 *** ML ***
 
+* Antiquotation @{make_string} is available during Pure bootstrap --
+with approximative output quality.
+
 * Option ML_exception_debugger controls detailed exception trace via the
 Poly/ML debugger. Relevant ML modules need to be compiled beforehand
 with ML_file_debug, or with ML_file and option ML_debugger enabled. Note
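The NEWS entry above can be exercised directly in Isabelle/ML; a minimal sketch (the tuple value is an arbitrary illustration, not part of the changeset):

```isabelle
(* Hedged sketch: @{make_string} converts an arbitrary ML value into a
   string representation -- during Pure bootstrap the output quality is
   only approximative, as noted in the NEWS entry. *)
ML \<open>writeln (@{make_string} (1, "a", [true, false]))\<close>
```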

2.1 --- a/src/Doc/Isar_Ref/Generic.thy Fri Mar 18 12:54:20 2016 +0100 2.2 +++ b/src/Doc/Isar_Ref/Generic.thy Fri Mar 18 20:29:50 2016 +0100 2.3 @@ -247,24 +247,23 @@ 2.4 2.5 section \<open>The Simplifier \label{sec:simplifier}\<close> 2.6 2.7 -text \<open>The Simplifier performs conditional and unconditional 2.8 - rewriting and uses contextual information: rule declarations in the 2.9 - background theory or local proof context are taken into account, as 2.10 - well as chained facts and subgoal premises (``local assumptions''). 2.11 - There are several general hooks that allow to modify the 2.12 - simplification strategy, or incorporate other proof tools that solve 2.13 - sub-problems, produce rewrite rules on demand etc. 2.14 +text \<open> 2.15 + The Simplifier performs conditional and unconditional rewriting and uses 2.16 + contextual information: rule declarations in the background theory or local 2.17 + proof context are taken into account, as well as chained facts and subgoal 2.18 + premises (``local assumptions''). There are several general hooks that allow 2.19 + to modify the simplification strategy, or incorporate other proof tools that 2.20 + solve sub-problems, produce rewrite rules on demand etc. 2.21 2.22 - The rewriting strategy is always strictly bottom up, except for 2.23 - congruence rules, which are applied while descending into a term. 2.24 - Conditions in conditional rewrite rules are solved recursively 2.25 - before the rewrite rule is applied. 2.26 + The rewriting strategy is always strictly bottom up, except for congruence 2.27 + rules, which are applied while descending into a term. Conditions in 2.28 + conditional rewrite rules are solved recursively before the rewrite rule is 2.29 + applied. 2.30 2.31 - The default Simplifier setup of major object logics (HOL, HOLCF, 2.32 - FOL, ZF) makes the Simplifier ready for immediate use, without 2.33 - engaging into the internal structures. 
Thus it serves as 2.34 - general-purpose proof tool with the main focus on equational 2.35 - reasoning, and a bit more than that. 2.36 + The default Simplifier setup of major object logics (HOL, HOLCF, FOL, ZF) 2.37 + makes the Simplifier ready for immediate use, without engaging into the 2.38 + internal structures. Thus it serves as general-purpose proof tool with the 2.39 + main focus on equational reasoning, and a bit more than that. 2.40 \<close> 2.41 2.42 2.43 @@ -288,69 +287,63 @@ 2.44 'cong' (() | 'add' | 'del')) ':' @{syntax thmrefs} 2.45 \<close>} 2.46 2.47 - \<^descr> @{method simp} invokes the Simplifier on the first subgoal, 2.48 - after inserting chained facts as additional goal premises; further 2.49 - rule declarations may be included via \<open>(simp add: facts)\<close>. 2.50 - The proof method fails if the subgoal remains unchanged after 2.51 - simplification. 2.52 + \<^descr> @{method simp} invokes the Simplifier on the first subgoal, after 2.53 + inserting chained facts as additional goal premises; further rule 2.54 + declarations may be included via \<open>(simp add: facts)\<close>. The proof method fails 2.55 + if the subgoal remains unchanged after simplification. 2.56 2.57 - Note that the original goal premises and chained facts are subject 2.58 - to simplification themselves, while declarations via \<open>add\<close>/\<open>del\<close> merely follow the policies of the object-logic 2.59 - to extract rewrite rules from theorems, without further 2.60 - simplification. This may lead to slightly different behavior in 2.61 - either case, which might be required precisely like that in some 2.62 - boundary situations to perform the intended simplification step! 
2.63 + Note that the original goal premises and chained facts are subject to 2.64 + simplification themselves, while declarations via \<open>add\<close>/\<open>del\<close> merely follow 2.65 + the policies of the object-logic to extract rewrite rules from theorems, 2.66 + without further simplification. This may lead to slightly different behavior 2.67 + in either case, which might be required precisely like that in some boundary 2.68 + situations to perform the intended simplification step! 2.69 2.70 \<^medskip> 2.71 - The \<open>only\<close> modifier first removes all other rewrite 2.72 - rules, looper tactics (including split rules), congruence rules, and 2.73 - then behaves like \<open>add\<close>. Implicit solvers remain, which means 2.74 - that trivial rules like reflexivity or introduction of \<open>True\<close> are available to solve the simplified subgoals, but also 2.75 - non-trivial tools like linear arithmetic in HOL. The latter may 2.76 - lead to some surprise of the meaning of ``only'' in Isabelle/HOL 2.77 - compared to English! 2.78 + The \<open>only\<close> modifier first removes all other rewrite rules, looper tactics 2.79 + (including split rules), congruence rules, and then behaves like \<open>add\<close>. 2.80 + Implicit solvers remain, which means that trivial rules like reflexivity or 2.81 + introduction of \<open>True\<close> are available to solve the simplified subgoals, but 2.82 + also non-trivial tools like linear arithmetic in HOL. The latter may lead to 2.83 + some surprise of the meaning of ``only'' in Isabelle/HOL compared to 2.84 + English! 2.85 2.86 \<^medskip> 2.87 - The \<open>split\<close> modifiers add or delete rules for the 2.88 - Splitter (see also \secref{sec:simp-strategies} on the looper). 2.89 - This works only if the Simplifier method has been properly setup to 2.90 - include the Splitter (all major object logics such HOL, HOLCF, FOL, 2.91 - ZF do this already). 
2.92 + The \<open>split\<close> modifiers add or delete rules for the Splitter (see also 2.93 + \secref{sec:simp-strategies} on the looper). This works only if the 2.94 + Simplifier method has been properly setup to include the Splitter (all major 2.95 + object logics such HOL, HOLCF, FOL, ZF do this already). 2.96 2.97 There is also a separate @{method_ref split} method available for 2.98 - single-step case splitting. The effect of repeatedly applying 2.99 - \<open>(split thms)\<close> can be imitated by ``\<open>(simp only: 2.100 - split: thms)\<close>''. 2.101 + single-step case splitting. The effect of repeatedly applying \<open>(split thms)\<close> 2.102 + can be imitated by ``\<open>(simp only: split: thms)\<close>''. 2.103 2.104 \<^medskip> 2.105 - The \<open>cong\<close> modifiers add or delete Simplifier 2.106 - congruence rules (see also \secref{sec:simp-rules}); the default is 2.107 - to add. 2.108 + The \<open>cong\<close> modifiers add or delete Simplifier congruence rules (see also 2.109 + \secref{sec:simp-rules}); the default is to add. 2.110 2.111 - \<^descr> @{method simp_all} is similar to @{method simp}, but acts on 2.112 - all goals, working backwards from the last to the first one as usual 2.113 - in Isabelle.\<^footnote>\<open>The order is irrelevant for goals without 2.114 - schematic variables, so simplification might actually be performed 2.115 - in parallel here.\<close> 2.116 + \<^descr> @{method simp_all} is similar to @{method simp}, but acts on all goals, 2.117 + working backwards from the last to the first one as usual in Isabelle.\<^footnote>\<open>The 2.118 + order is irrelevant for goals without schematic variables, so simplification 2.119 + might actually be performed in parallel here.\<close> 2.120 2.121 - Chained facts are inserted into all subgoals, before the 2.122 - simplification process starts. Further rule declarations are the 2.123 - same as for @{method simp}. 
2.124 + Chained facts are inserted into all subgoals, before the simplification 2.125 + process starts. Further rule declarations are the same as for @{method 2.126 + simp}. 2.127 2.128 The proof method fails if all subgoals remain unchanged after 2.129 simplification. 2.130 2.131 - \<^descr> @{attribute simp_depth_limit} limits the number of recursive 2.132 - invocations of the Simplifier during conditional rewriting. 2.133 + \<^descr> @{attribute simp_depth_limit} limits the number of recursive invocations 2.134 + of the Simplifier during conditional rewriting. 2.135 2.136 2.137 - By default the Simplifier methods above take local assumptions fully 2.138 - into account, using equational assumptions in the subsequent 2.139 - normalization process, or simplifying assumptions themselves. 2.140 - Further options allow to fine-tune the behavior of the Simplifier 2.141 - in this respect, corresponding to a variety of ML tactics as 2.142 - follows.\<^footnote>\<open>Unlike the corresponding Isar proof methods, the 2.143 - ML tactics do not insist in changing the goal state.\<close> 2.144 + By default the Simplifier methods above take local assumptions fully into 2.145 + account, using equational assumptions in the subsequent normalization 2.146 + process, or simplifying assumptions themselves. 
Further options allow to 2.147 + fine-tune the behavior of the Simplifier in this respect, corresponding to a 2.148 + variety of ML tactics as follows.\<^footnote>\<open>Unlike the corresponding Isar proof 2.149 + methods, the ML tactics do not insist in changing the goal state.\<close> 2.150 2.151 \begin{center} 2.152 \small 2.153 @@ -358,24 +351,21 @@ 2.154 \hline 2.155 Isar method & ML tactic & behavior \\\hline 2.156 2.157 - \<open>(simp (no_asm))\<close> & @{ML simp_tac} & assumptions are ignored 2.158 - completely \\\hline 2.159 + \<open>(simp (no_asm))\<close> & @{ML simp_tac} & assumptions are ignored completely 2.160 + \\\hline 2.161 2.162 - \<open>(simp (no_asm_simp))\<close> & @{ML asm_simp_tac} & assumptions 2.163 - are used in the simplification of the conclusion but are not 2.164 - themselves simplified \\\hline 2.165 + \<open>(simp (no_asm_simp))\<close> & @{ML asm_simp_tac} & assumptions are used in the 2.166 + simplification of the conclusion but are not themselves simplified \\\hline 2.167 2.168 - \<open>(simp (no_asm_use))\<close> & @{ML full_simp_tac} & assumptions 2.169 - are simplified but are not used in the simplification of each other 2.170 - or the conclusion \\\hline 2.171 + \<open>(simp (no_asm_use))\<close> & @{ML full_simp_tac} & assumptions are simplified but 2.172 + are not used in the simplification of each other or the conclusion \\\hline 2.173 2.174 - \<open>(simp)\<close> & @{ML asm_full_simp_tac} & assumptions are used in 2.175 - the simplification of the conclusion and to simplify other 2.176 - assumptions \\\hline 2.177 + \<open>(simp)\<close> & @{ML asm_full_simp_tac} & assumptions are used in the 2.178 + simplification of the conclusion and to simplify other assumptions \\\hline 2.179 2.180 - \<open>(simp (asm_lr))\<close> & @{ML asm_lr_simp_tac} & compatibility 2.181 - mode: an assumption is only used for simplifying assumptions which 2.182 - are to the right of it \\\hline 2.183 + \<open>(simp (asm_lr))\<close> & @{ML 
asm_lr_simp_tac} & compatibility mode: an 2.184 + assumption is only used for simplifying assumptions which are to the right 2.185 + of it \\\hline 2.186 2.187 \end{tabular} 2.188 \end{center} 2.189 @@ -384,18 +374,19 @@ 2.190 2.191 subsubsection \<open>Examples\<close> 2.192 2.193 -text \<open>We consider basic algebraic simplifications in Isabelle/HOL. 2.194 - The rather trivial goal @{prop "0 + (x + 0) = x + 0 + 0"} looks like 2.195 - a good candidate to be solved by a single call of @{method simp}: 2.196 +text \<open> 2.197 + We consider basic algebraic simplifications in Isabelle/HOL. The rather 2.198 + trivial goal @{prop "0 + (x + 0) = x + 0 + 0"} looks like a good candidate 2.199 + to be solved by a single call of @{method simp}: 2.200 \<close> 2.201 2.202 lemma "0 + (x + 0) = x + 0 + 0" apply simp? oops 2.203 2.204 -text \<open>The above attempt \<^emph>\<open>fails\<close>, because @{term "0"} and @{term 2.205 - "op +"} in the HOL library are declared as generic type class 2.206 - operations, without stating any algebraic laws yet. More specific 2.207 - types are required to get access to certain standard simplifications 2.208 - of the theory context, e.g.\ like this:\<close> 2.209 +text \<open> 2.210 + The above attempt \<^emph>\<open>fails\<close>, because @{term "0"} and @{term "op +"} in the 2.211 + HOL library are declared as generic type class operations, without stating 2.212 + any algebraic laws yet. More specific types are required to get access to 2.213 + certain standard simplifications of the theory context, e.g.\ like this:\<close> 2.214 2.215 lemma fixes x :: nat shows "0 + (x + 0) = x + 0 + 0" by simp 2.216 lemma fixes x :: int shows "0 + (x + 0) = x + 0 + 0" by simp 2.217 @@ -403,39 +394,38 @@ 2.218 2.219 text \<open> 2.220 \<^medskip> 2.221 - In many cases, assumptions of a subgoal are also needed in 2.222 - the simplification process. 
For example: 2.223 + In many cases, assumptions of a subgoal are also needed in the 2.224 + simplification process. For example: 2.225 \<close> 2.226 2.227 lemma fixes x :: nat shows "x = 0 \<Longrightarrow> x + x = 0" by simp 2.228 lemma fixes x :: nat assumes "x = 0" shows "x + x = 0" apply simp oops 2.229 lemma fixes x :: nat assumes "x = 0" shows "x + x = 0" using assms by simp 2.230 2.231 -text \<open>As seen above, local assumptions that shall contribute to 2.232 - simplification need to be part of the subgoal already, or indicated 2.233 - explicitly for use by the subsequent method invocation. Both too 2.234 - little or too much information can make simplification fail, for 2.235 - different reasons. 2.236 +text \<open> 2.237 + As seen above, local assumptions that shall contribute to simplification 2.238 + need to be part of the subgoal already, or indicated explicitly for use by 2.239 + the subsequent method invocation. Both too little or too much information 2.240 + can make simplification fail, for different reasons. 2.241 2.242 - In the next example the malicious assumption @{prop "\<And>x::nat. f x = 2.243 - g (f (g x))"} does not contribute to solve the problem, but makes 2.244 - the default @{method simp} method loop: the rewrite rule \<open>f 2.245 - ?x \<equiv> g (f (g ?x))\<close> extracted from the assumption does not 2.246 - terminate. The Simplifier notices certain simple forms of 2.247 - nontermination, but not this one. The problem can be solved 2.248 - nonetheless, by ignoring assumptions via special options as 2.249 - explained before: 2.250 + In the next example the malicious assumption @{prop "\<And>x::nat. f x = g (f (g 2.251 + x))"} does not contribute to solve the problem, but makes the default 2.252 + @{method simp} method loop: the rewrite rule \<open>f ?x \<equiv> g (f (g ?x))\<close> extracted 2.253 + from the assumption does not terminate. The Simplifier notices certain 2.254 + simple forms of nontermination, but not this one. 
The problem can be solved 2.255 + nonetheless, by ignoring assumptions via special options as explained 2.256 + before: 2.257 \<close> 2.258 2.259 lemma "(\<And>x::nat. f x = g (f (g x))) \<Longrightarrow> f 0 = f 0 + 0" 2.260 by (simp (no_asm)) 2.261 2.262 -text \<open>The latter form is typical for long unstructured proof 2.263 - scripts, where the control over the goal content is limited. In 2.264 - structured proofs it is usually better to avoid pushing too many 2.265 - facts into the goal state in the first place. Assumptions in the 2.266 - Isar proof context do not intrude the reasoning if not used 2.267 - explicitly. This is illustrated for a toplevel statement and a 2.268 +text \<open> 2.269 + The latter form is typical for long unstructured proof scripts, where the 2.270 + control over the goal content is limited. In structured proofs it is usually 2.271 + better to avoid pushing too many facts into the goal state in the first 2.272 + place. Assumptions in the Isar proof context do not intrude the reasoning if 2.273 + not used explicitly. This is illustrated for a toplevel statement and a 2.274 local proof body as follows: 2.275 \<close> 2.276 2.277 @@ -451,26 +441,26 @@ 2.278 2.279 text \<open> 2.280 \<^medskip> 2.281 - Because assumptions may simplify each other, there 2.282 - can be very subtle cases of nontermination. For example, the regular 2.283 - @{method simp} method applied to @{prop "P (f x) \<Longrightarrow> y = x \<Longrightarrow> f x = f y 2.284 - \<Longrightarrow> Q"} gives rise to the infinite reduction sequence 2.285 + Because assumptions may simplify each other, there can be very subtle cases 2.286 + of nontermination. 
For example, the regular @{method simp} method applied to 2.287 + @{prop "P (f x) \<Longrightarrow> y = x \<Longrightarrow> f x = f y \<Longrightarrow> Q"} gives rise to the infinite 2.288 + reduction sequence 2.289 \[ 2.290 \<open>P (f x)\<close> \stackrel{\<open>f x \<equiv> f y\<close>}{\longmapsto} 2.291 \<open>P (f y)\<close> \stackrel{\<open>y \<equiv> x\<close>}{\longmapsto} 2.292 \<open>P (f x)\<close> \stackrel{\<open>f x \<equiv> f y\<close>}{\longmapsto} \cdots 2.293 \] 2.294 - whereas applying the same to @{prop "y = x \<Longrightarrow> f x = f y \<Longrightarrow> P (f x) \<Longrightarrow> 2.295 - Q"} terminates (without solving the goal): 2.296 + whereas applying the same to @{prop "y = x \<Longrightarrow> f x = f y \<Longrightarrow> P (f x) \<Longrightarrow> Q"} 2.297 + terminates (without solving the goal): 2.298 \<close> 2.299 2.300 lemma "y = x \<Longrightarrow> f x = f y \<Longrightarrow> P (f x) \<Longrightarrow> Q" 2.301 apply simp 2.302 oops 2.303 2.304 -text \<open>See also \secref{sec:simp-trace} for options to enable 2.305 - Simplifier trace mode, which often helps to diagnose problems with 2.306 - rewrite systems. 2.307 +text \<open> 2.308 + See also \secref{sec:simp-trace} for options to enable Simplifier trace 2.309 + mode, which often helps to diagnose problems with rewrite systems. 2.310 \<close> 2.311 2.312 2.313 @@ -491,200 +481,180 @@ 2.314 @@{command print_simpset} ('!'?) 2.315 \<close>} 2.316 2.317 - \<^descr> @{attribute simp} declares rewrite rules, by adding or 2.318 - deleting them from the simpset within the theory or proof context. 2.319 - Rewrite rules are theorems expressing some form of equality, for 2.320 - example: 2.321 + \<^descr> @{attribute simp} declares rewrite rules, by adding or deleting them from 2.322 + the simpset within the theory or proof context. 
Rewrite rules are theorems 2.323 + expressing some form of equality, for example: 2.324 2.325 \<open>Suc ?m + ?n = ?m + Suc ?n\<close> \\ 2.326 \<open>?P \<and> ?P \<longleftrightarrow> ?P\<close> \\ 2.327 \<open>?A \<union> ?B \<equiv> {x. x \<in> ?A \<or> x \<in> ?B}\<close> 2.328 2.329 \<^medskip> 2.330 - Conditional rewrites such as \<open>?m < ?n \<Longrightarrow> ?m div ?n = 0\<close> are 2.331 - also permitted; the conditions can be arbitrary formulas. 2.332 + Conditional rewrites such as \<open>?m < ?n \<Longrightarrow> ?m div ?n = 0\<close> are also permitted; 2.333 + the conditions can be arbitrary formulas. 2.334 2.335 \<^medskip> 2.336 - Internally, all rewrite rules are translated into Pure 2.337 - equalities, theorems with conclusion \<open>lhs \<equiv> rhs\<close>. The 2.338 - simpset contains a function for extracting equalities from arbitrary 2.339 - theorems, which is usually installed when the object-logic is 2.340 - configured initially. For example, \<open>\<not> ?x \<in> {}\<close> could be 2.341 - turned into \<open>?x \<in> {} \<equiv> False\<close>. Theorems that are declared as 2.342 - @{attribute simp} and local assumptions within a goal are treated 2.343 - uniformly in this respect. 2.344 + Internally, all rewrite rules are translated into Pure equalities, theorems 2.345 + with conclusion \<open>lhs \<equiv> rhs\<close>. The simpset contains a function for extracting 2.346 + equalities from arbitrary theorems, which is usually installed when the 2.347 + object-logic is configured initially. For example, \<open>\<not> ?x \<in> {}\<close> could be 2.348 + turned into \<open>?x \<in> {} \<equiv> False\<close>. Theorems that are declared as @{attribute 2.349 + simp} and local assumptions within a goal are treated uniformly in this 2.350 + respect. 
2.351 2.352 - The Simplifier accepts the following formats for the \<open>lhs\<close> 2.353 - term: 2.354 + The Simplifier accepts the following formats for the \<open>lhs\<close> term: 2.355 2.356 - \<^enum> First-order patterns, considering the sublanguage of 2.357 - application of constant operators to variable operands, without 2.358 - \<open>\<lambda>\<close>-abstractions or functional variables. 2.359 - For example: 2.360 + \<^enum> First-order patterns, considering the sublanguage of application of 2.361 + constant operators to variable operands, without \<open>\<lambda>\<close>-abstractions or 2.362 + functional variables. For example: 2.363 2.364 \<open>(?x + ?y) + ?z \<equiv> ?x + (?y + ?z)\<close> \\ 2.365 \<open>f (f ?x ?y) ?z \<equiv> f ?x (f ?y ?z)\<close> 2.366 2.367 - \<^enum> Higher-order patterns in the sense of @{cite "nipkow-patterns"}. 2.368 - These are terms in \<open>\<beta>\<close>-normal form (this will always be the 2.369 - case unless you have done something strange) where each occurrence 2.370 - of an unknown is of the form \<open>?F x\<^sub>1 \<dots> x\<^sub>n\<close>, where the 2.371 - \<open>x\<^sub>i\<close> are distinct bound variables. 2.372 + \<^enum> Higher-order patterns in the sense of @{cite "nipkow-patterns"}. These 2.373 + are terms in \<open>\<beta>\<close>-normal form (this will always be the case unless you have 2.374 + done something strange) where each occurrence of an unknown is of the form 2.375 + \<open>?F x\<^sub>1 \<dots> x\<^sub>n\<close>, where the \<open>x\<^sub>i\<close> are distinct bound variables. 2.376 2.377 - For example, \<open>(\<forall>x. ?P x \<and> ?Q x) \<equiv> (\<forall>x. ?P x) \<and> (\<forall>x. ?Q x)\<close> 2.378 - or its symmetric form, since the \<open>rhs\<close> is also a 2.379 - higher-order pattern. 2.380 + For example, \<open>(\<forall>x. ?P x \<and> ?Q x) \<equiv> (\<forall>x. ?P x) \<and> (\<forall>x. 
?Q x)\<close> or its 2.381 + symmetric form, since the \<open>rhs\<close> is also a higher-order pattern. 2.382 2.383 - \<^enum> Physical first-order patterns over raw \<open>\<lambda>\<close>-term 2.384 - structure without \<open>\<alpha>\<beta>\<eta>\<close>-equality; abstractions and bound 2.385 - variables are treated like quasi-constant term material. 2.386 + \<^enum> Physical first-order patterns over raw \<open>\<lambda>\<close>-term structure without 2.387 + \<open>\<alpha>\<beta>\<eta>\<close>-equality; abstractions and bound variables are treated like 2.388 + quasi-constant term material. 2.389 2.390 - For example, the rule \<open>?f ?x \<in> range ?f = True\<close> rewrites the 2.391 - term \<open>g a \<in> range g\<close> to \<open>True\<close>, but will fail to 2.392 - match \<open>g (h b) \<in> range (\<lambda>x. g (h x))\<close>. However, offending 2.393 - subterms (in our case \<open>?f ?x\<close>, which is not a pattern) can 2.394 - be replaced by adding new variables and conditions like this: \<open>?y = ?f ?x \<Longrightarrow> ?y \<in> range ?f = True\<close> is acceptable as a conditional 2.395 - rewrite rule of the second category since conditions can be 2.396 - arbitrary terms. 2.397 + For example, the rule \<open>?f ?x \<in> range ?f = True\<close> rewrites the term \<open>g a \<in> 2.398 + range g\<close> to \<open>True\<close>, but will fail to match \<open>g (h b) \<in> range (\<lambda>x. g (h 2.399 + x))\<close>. However, offending subterms (in our case \<open>?f ?x\<close>, which is not a 2.400 + pattern) can be replaced by adding new variables and conditions like this: 2.401 + \<open>?y = ?f ?x \<Longrightarrow> ?y \<in> range ?f = True\<close> is acceptable as a conditional rewrite 2.402 + rule of the second category since conditions can be arbitrary terms. 2.403 2.404 \<^descr> @{attribute split} declares case split rules. 2.405 2.406 - \<^descr> @{attribute cong} declares congruence rules to the Simplifier 2.407 - context. 
2.408 + \<^descr> @{attribute cong} declares congruence rules to the Simplifier context. 2.409 2.410 Congruence rules are equalities of the form @{text [display] 2.411 "\<dots> \<Longrightarrow> f ?x\<^sub>1 \<dots> ?x\<^sub>n = f ?y\<^sub>1 \<dots> ?y\<^sub>n"} 2.412 2.413 - This controls the simplification of the arguments of \<open>f\<close>. For 2.414 - example, some arguments can be simplified under additional 2.415 - assumptions: @{text [display] "?P\<^sub>1 \<longleftrightarrow> ?Q\<^sub>1 \<Longrightarrow> (?Q\<^sub>1 \<Longrightarrow> ?P\<^sub>2 \<longleftrightarrow> ?Q\<^sub>2) \<Longrightarrow> 2.416 - (?P\<^sub>1 \<longrightarrow> ?P\<^sub>2) \<longleftrightarrow> (?Q\<^sub>1 \<longrightarrow> ?Q\<^sub>2)"} 2.417 + This controls the simplification of the arguments of \<open>f\<close>. For example, some 2.418 + arguments can be simplified under additional assumptions: 2.419 + @{text [display] 2.420 + "?P\<^sub>1 \<longleftrightarrow> ?Q\<^sub>1 \<Longrightarrow> 2.421 + (?Q\<^sub>1 \<Longrightarrow> ?P\<^sub>2 \<longleftrightarrow> ?Q\<^sub>2) \<Longrightarrow> 2.422 + (?P\<^sub>1 \<longrightarrow> ?P\<^sub>2) \<longleftrightarrow> (?Q\<^sub>1 \<longrightarrow> ?Q\<^sub>2)"} 2.423 2.424 - Given this rule, the Simplifier assumes \<open>?Q\<^sub>1\<close> and extracts 2.425 - rewrite rules from it when simplifying \<open>?P\<^sub>2\<close>. Such local 2.426 - assumptions are effective for rewriting formulae such as \<open>x = 2.427 - 0 \<longrightarrow> y + x = y\<close>. 2.428 + Given this rule, the Simplifier assumes \<open>?Q\<^sub>1\<close> and extracts rewrite rules 2.429 + from it when simplifying \<open>?P\<^sub>2\<close>. Such local assumptions are effective for 2.430 + rewriting formulae such as \<open>x = 0 \<longrightarrow> y + x = y\<close>. 2.431 2.432 %FIXME 2.433 %The local assumptions are also provided as theorems to the solver; 2.434 %see \secref{sec:simp-solver} below. 
2.435 2.436 \<^medskip> 2.437 - The following congruence rule for bounded quantifiers also 2.438 - supplies contextual information --- about the bound variable: 2.439 - @{text [display] "(?A = ?B) \<Longrightarrow> (\<And>x. x \<in> ?B \<Longrightarrow> ?P x \<longleftrightarrow> ?Q x) \<Longrightarrow> 2.440 + The following congruence rule for bounded quantifiers also supplies 2.441 + contextual information --- about the bound variable: @{text [display] 2.442 + "(?A = ?B) \<Longrightarrow> 2.443 + (\<And>x. x \<in> ?B \<Longrightarrow> ?P x \<longleftrightarrow> ?Q x) \<Longrightarrow> 2.444 (\<forall>x \<in> ?A. ?P x) \<longleftrightarrow> (\<forall>x \<in> ?B. ?Q x)"} 2.445 2.446 \<^medskip> 2.447 - This congruence rule for conditional expressions can 2.448 - supply contextual information for simplifying the arms: 2.449 - @{text [display] "?p = ?q \<Longrightarrow> (?q \<Longrightarrow> ?a = ?c) \<Longrightarrow> (\<not> ?q \<Longrightarrow> ?b = ?d) \<Longrightarrow> 2.450 + This congruence rule for conditional expressions can supply contextual 2.451 + information for simplifying the arms: @{text [display] 2.452 + "?p = ?q \<Longrightarrow> 2.453 + (?q \<Longrightarrow> ?a = ?c) \<Longrightarrow> 2.454 + (\<not> ?q \<Longrightarrow> ?b = ?d) \<Longrightarrow> 2.455 (if ?p then ?a else ?b) = (if ?q then ?c else ?d)"} 2.456 2.457 - A congruence rule can also \<^emph>\<open>prevent\<close> simplification of some 2.458 - arguments. Here is an alternative congruence rule for conditional 2.459 - expressions that conforms to non-strict functional evaluation: 2.460 - @{text [display] "?p = ?q \<Longrightarrow> (if ?p then ?a else ?b) = (if ?q then ?a else ?b)"} 2.461 + A congruence rule can also \<^emph>\<open>prevent\<close> simplification of some arguments. 
Here 2.462 + is an alternative congruence rule for conditional expressions that conforms 2.463 + to non-strict functional evaluation: @{text [display] 2.464 + "?p = ?q \<Longrightarrow> 2.465 + (if ?p then ?a else ?b) = (if ?q then ?a else ?b)"} 2.466 2.467 - Only the first argument is simplified; the others remain unchanged. 2.468 - This can make simplification much faster, but may require an extra 2.469 - case split over the condition \<open>?q\<close> to prove the goal. 2.470 - 2.471 - \<^descr> @{command "print_simpset"} prints the collection of rules declared 2.472 - to the Simplifier, which is also known as ``simpset'' internally; the 2.473 - ``\<open>!\<close>'' option indicates extra verbosity. 2.474 + Only the first argument is simplified; the others remain unchanged. This can 2.475 + make simplification much faster, but may require an extra case split over 2.476 + the condition \<open>?q\<close> to prove the goal. 2.477 2.478 - For historical reasons, simpsets may occur independently from the 2.479 - current context, but are conceptually dependent on it. When the 2.480 - Simplifier is invoked via one of its main entry points in the Isar 2.481 - source language (as proof method \secref{sec:simp-meth} or rule 2.482 - attribute \secref{sec:simp-meth}), its simpset is derived from the 2.483 - current proof context, and carries a back-reference to that for 2.484 - other tools that might get invoked internally (e.g.\ simplification 2.485 - procedures \secref{sec:simproc}). A mismatch of the context of the 2.486 - simpset and the context of the problem being simplified may lead to 2.487 - unexpected results. 2.488 + \<^descr> @{command "print_simpset"} prints the collection of rules declared to the 2.489 + Simplifier, which is also known as ``simpset'' internally; the ``\<open>!\<close>'' 2.490 + option indicates extra verbosity. 
2.491 2.492 - 2.493 - The implicit simpset of the theory context is propagated 2.494 - monotonically through the theory hierarchy: forming a new theory, 2.495 - the union of the simpsets of its imports are taken as starting 2.496 - point. Also note that definitional packages like @{command 2.497 - "datatype"}, @{command "primrec"}, @{command "fun"} routinely 2.498 - declare Simplifier rules to the target context, while plain 2.499 - @{command "definition"} is an exception in \<^emph>\<open>not\<close> declaring 2.500 + The implicit simpset of the theory context is propagated monotonically 2.501 + through the theory hierarchy: forming a new theory, the union of the 2.502 + simpsets of its imports are taken as starting point. Also note that 2.503 + definitional packages like @{command "datatype"}, @{command "primrec"}, 2.504 + @{command "fun"} routinely declare Simplifier rules to the target context, 2.505 + while plain @{command "definition"} is an exception in \<^emph>\<open>not\<close> declaring 2.506 anything. 2.507 2.508 \<^medskip> 2.509 - It is up the user to manipulate the current simpset further 2.510 - by explicitly adding or deleting theorems as simplification rules, 2.511 - or installing other tools via simplification procedures 2.512 - (\secref{sec:simproc}). Good simpsets are hard to design. Rules 2.513 - that obviously simplify, like \<open>?n + 0 \<equiv> ?n\<close> are good 2.514 - candidates for the implicit simpset, unless a special 2.515 - non-normalizing behavior of certain operations is intended. More 2.516 - specific rules (such as distributive laws, which duplicate subterms) 2.517 - should be added only for specific proof steps. Conversely, 2.518 - sometimes a rule needs to be deleted just for some part of a proof. 2.519 - The need of frequent additions or deletions may indicate a poorly 2.520 - designed simpset. 
+ It is up the user to manipulate the current simpset further by explicitly
+ adding or deleting theorems as simplification rules, or installing other
+ tools via simplification procedures (\secref{sec:simproc}). Good simpsets
+ are hard to design. Rules that obviously simplify, like \<open>?n + 0 \<equiv> ?n\<close> are
+ good candidates for the implicit simpset, unless a special non-normalizing
+ behavior of certain operations is intended. More specific rules (such as
+ distributive laws, which duplicate subterms) should be added only for
+ specific proof steps. Conversely, sometimes a rule needs to be deleted just
+ for some part of a proof. The need of frequent additions or deletions may
+ indicate a poorly designed simpset.

  \begin{warn}
- The union of simpsets from theory imports (as described above) is
- not always a good starting point for the new theory. If some
- ancestors have deleted simplification rules because they are no
- longer wanted, while others have left those rules in, then the union
- will contain the unwanted rules, and thus have to be deleted again
- in the theory body.
+ The union of simpsets from theory imports (as described above) is not always
+ a good starting point for the new theory. If some ancestors have deleted
+ simplification rules because they are no longer wanted, while others have
+ left those rules in, then the union will contain the unwanted rules, and
+ thus have to be deleted again in the theory body.
  \end{warn}
  \<close>


  subsection \<open>Ordered rewriting with permutative rules\<close>

-text \<open>A rewrite rule is \<^emph>\<open>permutative\<close> if the left-hand side and
- right-hand side are the equal up to renaming of variables. The most
- common permutative rule is commutativity: \<open>?x + ?y = ?y +
- ?x\<close>. Other examples include \<open>(?x - ?y) - ?z = (?x - ?z) -
- ?y\<close> in arithmetic and \<open>insert ?x (insert ?y ?A) = insert ?y
- (insert ?x ?A)\<close> for sets. Such rules are common enough to merit
- special attention.
+text \<open>
+ A rewrite rule is \<^emph>\<open>permutative\<close> if the left-hand side and right-hand side
+ are the equal up to renaming of variables. The most common permutative rule
+ is commutativity: \<open>?x + ?y = ?y + ?x\<close>. Other examples include \<open>(?x - ?y) -
+ ?z = (?x - ?z) - ?y\<close> in arithmetic and \<open>insert ?x (insert ?y ?A) = insert ?y
+ (insert ?x ?A)\<close> for sets. Such rules are common enough to merit special
+ attention.

- Because ordinary rewriting loops given such rules, the Simplifier
- employs a special strategy, called \<^emph>\<open>ordered rewriting\<close>.
- Permutative rules are detected and only applied if the rewriting
- step decreases the redex wrt.\ a given term ordering. For example,
- commutativity rewrites \<open>b + a\<close> to \<open>a + b\<close>, but then
- stops, because the redex cannot be decreased further in the sense of
- the term ordering.
+ Because ordinary rewriting loops given such rules, the Simplifier employs a
+ special strategy, called \<^emph>\<open>ordered rewriting\<close>. Permutative rules are
+ detected and only applied if the rewriting step decreases the redex wrt.\ a
+ given term ordering. For example, commutativity rewrites \<open>b + a\<close> to \<open>a + b\<close>,
+ but then stops, because the redex cannot be decreased further in the sense
+ of the term ordering.

- The default is lexicographic ordering of term structure, but this
- could be also changed locally for special applications via
- @{index_ML Simplifier.set_termless} in Isabelle/ML.
+ The default is lexicographic ordering of term structure, but this could be
+ also changed locally for special applications via @{index_ML
+ Simplifier.set_termless} in Isabelle/ML.

  \<^medskip>
- Permutative rewrite rules are declared to the Simplifier
- just like other rewrite rules. Their special status is recognized
- automatically, and their application is guarded by the term ordering
- accordingly.\<close>
+ Permutative rewrite rules are declared to the Simplifier just like other
+ rewrite rules. Their special status is recognized automatically, and their
+ application is guarded by the term ordering accordingly.
+\<close>


  subsubsection \<open>Rewriting with AC operators\<close>

-text \<open>Ordered rewriting is particularly effective in the case of
- associative-commutative operators. (Associativity by itself is not
- permutative.) When dealing with an AC-operator \<open>f\<close>, keep
- the following points in mind:
+text \<open>
+ Ordered rewriting is particularly effective in the case of
+ associative-commutative operators. (Associativity by itself is not
+ permutative.) When dealing with an AC-operator \<open>f\<close>, keep the following
+ points in mind:

- \<^item> The associative law must always be oriented from left to
- right, namely \<open>f (f x y) z = f x (f y z)\<close>. The opposite
- orientation, if used with commutativity, leads to looping in
- conjunction with the standard term order.
+ \<^item> The associative law must always be oriented from left to right, namely
+ \<open>f (f x y) z = f x (f y z)\<close>. The opposite orientation, if used with
+ commutativity, leads to looping in conjunction with the standard term
+ order.

- \<^item> To complete your set of rewrite rules, you must add not just
- associativity (A) and commutativity (C) but also a derived rule
- \<^emph>\<open>left-commutativity\<close> (LC): \<open>f x (f y z) = f y (f x z)\<close>.
-
+ \<^item> To complete your set of rewrite rules, you must add not just
+ associativity (A) and commutativity (C) but also a derived rule
+ \<^emph>\<open>left-commutativity\<close> (LC): \<open>f x (f y z) = f y (f x z)\<close>.

  Ordered rewriting with the combination of A, C, and LC sorts a term
  lexicographically --- the rewriting engine imitates bubble-sort.
@@ -704,9 +674,11 @@

  lemmas AC_rules = assoc commute left_commute

-text \<open>Thus the Simplifier is able to establish equalities with
- arbitrary permutations of subterms, by normalizing to a common
- standard form. For example:\<close>
+text \<open>
+ Thus the Simplifier is able to establish equalities with arbitrary
+ permutations of subterms, by normalizing to a common standard form. For
+ example:
+\<close>

  lemma "(b \<bullet> c) \<bullet> a = xxx"
  apply (simp only: AC_rules)
@@ -719,10 +691,11 @@

  end

-text \<open>Martin and Nipkow @{cite "martin-nipkow"} discuss the theory and
- give many examples; other algebraic structures are amenable to
- ordered rewriting, such as Boolean rings. The Boyer-Moore theorem
- prover @{cite bm88book} also employs ordered rewriting.
+text \<open>
+ Martin and Nipkow @{cite "martin-nipkow"} discuss the theory and give many
+ examples; other algebraic structures are amenable to ordered rewriting, such
+ as Boolean rings. The Boyer-Moore theorem prover @{cite bm88book} also
+ employs ordered rewriting.
  \<close>


@@ -761,29 +734,29 @@
  These attributes and configurations options control various aspects of
  Simplifier tracing and debugging.

- \<^descr> @{attribute simp_trace} makes the Simplifier output internal
- operations. This includes rewrite steps, but also bookkeeping like
- modifications of the simpset.
+ \<^descr> @{attribute simp_trace} makes the Simplifier output internal operations.
+ This includes rewrite steps, but also bookkeeping like modifications of the
+ simpset.

- \<^descr> @{attribute simp_trace_depth_limit} limits the effect of
- @{attribute simp_trace} to the given depth of recursive Simplifier
- invocations (when solving conditions of rewrite rules).
+ \<^descr> @{attribute simp_trace_depth_limit} limits the effect of @{attribute
+ simp_trace} to the given depth of recursive Simplifier invocations (when
+ solving conditions of rewrite rules).

- \<^descr> @{attribute simp_debug} makes the Simplifier output some extra
- information about internal operations. This includes any attempted
- invocation of simplification procedures.
+ \<^descr> @{attribute simp_debug} makes the Simplifier output some extra information
+ about internal operations. This includes any attempted invocation of
+ simplification procedures.

  \<^descr> @{attribute simp_trace_new} controls Simplifier tracing within
  Isabelle/PIDE applications, notably Isabelle/jEdit @{cite "isabelle-jedit"}.
- This provides a hierarchical representation of the rewriting steps
- performed by the Simplifier.
+ This provides a hierarchical representation of the rewriting steps performed
+ by the Simplifier.

  Users can configure the behaviour by specifying breakpoints, verbosity and
  enabling or disabling the interactive mode. In normal verbosity (the
- default), only rule applications matching a breakpoint will be shown to
- the user. In full verbosity, all rule applications will be logged.
- Interactive mode interrupts the normal flow of the Simplifier and defers
- the decision how to continue to the user via some GUI dialog.
+ default), only rule applications matching a breakpoint will be shown to the
+ user. In full verbosity, all rule applications will be logged. Interactive
+ mode interrupts the normal flow of the Simplifier and defers the decision
+ how to continue to the user via some GUI dialog.

  \<^descr> @{attribute simp_break} declares term or theorem breakpoints for
  @{attribute simp_trace_new} as described above. Term breakpoints are
@@ -800,21 +773,19 @@

  subsection \<open>Simplification procedures \label{sec:simproc}\<close>

-text \<open>Simplification procedures are ML functions that produce proven
- rewrite rules on demand. They are associated with higher-order
- patterns that approximate the left-hand sides of equations. The
- Simplifier first matches the current redex against one of the LHS
- patterns; if this succeeds, the corresponding ML function is
- invoked, passing the Simplifier context and redex term. Thus rules
- may be specifically fashioned for particular situations, resulting
- in a more powerful mechanism than term rewriting by a fixed set of
- rules.
+text \<open>
+ Simplification procedures are ML functions that produce proven rewrite rules
+ on demand. They are associated with higher-order patterns that approximate
+ the left-hand sides of equations. The Simplifier first matches the current
+ redex against one of the LHS patterns; if this succeeds, the corresponding
+ ML function is invoked, passing the Simplifier context and redex term. Thus
+ rules may be specifically fashioned for particular situations, resulting in
+ a more powerful mechanism than term rewriting by a fixed set of rules.

- Any successful result needs to be a (possibly conditional) rewrite
- rule \<open>t \<equiv> u\<close> that is applicable to the current redex. The
- rule will be applied just as any ordinary rewrite rule. It is
- expected to be already in \<^emph>\<open>internal form\<close>, bypassing the
- automatic preprocessing of object-level equivalences.
+ Any successful result needs to be a (possibly conditional) rewrite rule \<open>t \<equiv>
+ u\<close> that is applicable to the current redex. The rule will be applied just as
+ any ordinary rewrite rule. It is expected to be already in \<^emph>\<open>internal form\<close>,
+ bypassing the automatic preprocessing of object-level equivalences.

  \begin{matharray}{rcl}
  @{command_def "simproc_setup"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
@@ -829,40 +800,39 @@
  @@{attribute simproc} (('add' ':')? | 'del' ':') (@{syntax name}+)
  \<close>}

- \<^descr> @{command "simproc_setup"} defines a named simplification
- procedure that is invoked by the Simplifier whenever any of the
- given term patterns match the current redex. The implementation,
- which is provided as ML source text, needs to be of type @{ML_type
- "morphism -> simpset -> cterm -> thm option"}, where the @{ML_type
- cterm} represents the current redex \<open>r\<close> and the result is
- supposed to be some proven rewrite rule \<open>r \<equiv> r'\<close> (or a
- generalized version), or @{ML NONE} to indicate failure. The
- @{ML_type simpset} argument holds the full context of the current
- Simplifier invocation, including the actual Isar proof context. The
- @{ML_type morphism} informs about the difference of the original
- compilation context wrt.\ the one of the actual application later
- on. The optional @{keyword "identifier"} specifies theorems that
- represent the logical content of the abstract theory of this
- simproc.
+ \<^descr> @{command "simproc_setup"} defines a named simplification procedure that
+ is invoked by the Simplifier whenever any of the given term patterns match
+ the current redex. The implementation, which is provided as ML source text,
+ needs to be of type
+ @{ML_type "morphism -> Proof.context -> cterm -> thm option"}, where the
+ @{ML_type cterm} represents the current redex \<open>r\<close> and the result is supposed
+ to be some proven rewrite rule \<open>r \<equiv> r'\<close> (or a generalized version), or @{ML
+ NONE} to indicate failure. The @{ML_type Proof.context} argument holds the
+ full context of the current Simplifier invocation. The @{ML_type morphism}
+ informs about the difference of the original compilation context wrt.\ the
+ one of the actual application later on. The optional @{keyword "identifier"}
+ specifies theorems that represent the logical content of the abstract theory
+ of this simproc.

- Morphisms and identifiers are only relevant for simprocs that are
- defined within a local target context, e.g.\ in a locale.
+ Morphisms and identifiers are only relevant for simprocs that are defined
+ within a local target context, e.g.\ in a locale.

- \<^descr> \<open>simproc add: name\<close> and \<open>simproc del: name\<close>
- add or delete named simprocs to the current Simplifier context. The
- default is to add a simproc. Note that @{command "simproc_setup"}
- already adds the new simproc to the subsequent context.
+ \<^descr> \<open>simproc add: name\<close> and \<open>simproc del: name\<close> add or delete named simprocs
+ to the current Simplifier context. The default is to add a simproc. Note
+ that @{command "simproc_setup"} already adds the new simproc to the
+ subsequent context.
  \<close>


  subsubsection \<open>Example\<close>

-text \<open>The following simplification procedure for @{thm
- [source=false, show_types] unit_eq} in HOL performs fine-grained
- control over rule application, beyond higher-order pattern matching.
- Declaring @{thm unit_eq} as @{attribute simp} directly would make
- the Simplifier loop! Note that a version of this simplification
- procedure is already active in Isabelle/HOL.\<close>
+text \<open>
+ The following simplification procedure for @{thm [source = false,
+ show_types] unit_eq} in HOL performs fine-grained control over rule
+ application, beyond higher-order pattern matching. Declaring @{thm unit_eq}
+ as @{attribute simp} directly would make the Simplifier loop! Note that a
+ version of this simplification procedure is already active in Isabelle/HOL.
+\<close>

  (*<*)experiment begin(*>*)
  simproc_setup unit ("x::unit") =
@@ -871,20 +841,20 @@
  else SOME (mk_meta_eq @{thm unit_eq})\<close>
  (*<*)end(*>*)

-text \<open>Since the Simplifier applies simplification procedures
- frequently, it is important to make the failure check in ML
- reasonably fast.\<close>
+text \<open>
+ Since the Simplifier applies simplification procedures frequently, it is
+ important to make the failure check in ML reasonably fast.\<close>


  subsection \<open>Configurable Simplifier strategies \label{sec:simp-strategies}\<close>

-text \<open>The core term-rewriting engine of the Simplifier is normally
- used in combination with some add-on components that modify the
- strategy and allow to integrate other non-Simplifier proof tools.
- These may be reconfigured in ML as explained below. Even if the
- default strategies of object-logics like Isabelle/HOL are used
- unchanged, it helps to understand how the standard Simplifier
- strategies work.\<close>
+text \<open>
+ The core term-rewriting engine of the Simplifier is normally used in
+ combination with some add-on components that modify the strategy and allow
+ to integrate other non-Simplifier proof tools. These may be reconfigured in
+ ML as explained below. Even if the default strategies of object-logics like
+ Isabelle/HOL are used unchanged, it helps to understand how the standard
+ Simplifier strategies work.\<close>


  subsubsection \<open>The subgoaler\<close>
@@ -897,21 +867,20 @@
  \end{mldecls}

  The subgoaler is the tactic used to solve subgoals arising out of
- conditional rewrite rules or congruence rules. The default should
- be simplification itself. In rare situations, this strategy may
- need to be changed. For example, if the premise of a conditional
- rule is an instance of its conclusion, as in \<open>Suc ?m < ?n \<Longrightarrow>
- ?m < ?n\<close>, the default strategy could loop. % FIXME !??
+ conditional rewrite rules or congruence rules. The default should be
+ simplification itself. In rare situations, this strategy may need to be
+ changed. For example, if the premise of a conditional rule is an instance of
+ its conclusion, as in \<open>Suc ?m < ?n \<Longrightarrow> ?m < ?n\<close>, the default strategy could
+ loop. % FIXME !??

- \<^descr> @{ML Simplifier.set_subgoaler}~\<open>tac ctxt\<close> sets the
- subgoaler of the context to \<open>tac\<close>. The tactic will
- be applied to the context of the running Simplifier instance.
+ \<^descr> @{ML Simplifier.set_subgoaler}~\<open>tac ctxt\<close> sets the subgoaler of the
+ context to \<open>tac\<close>. The tactic will be applied to the context of the running
+ Simplifier instance.

- \<^descr> @{ML Simplifier.prems_of}~\<open>ctxt\<close> retrieves the current
- set of premises from the context. This may be non-empty only if
- the Simplifier has been told to utilize local assumptions in the
- first place (cf.\ the options in \secref{sec:simp-meth}).
-
+ \<^descr> @{ML Simplifier.prems_of}~\<open>ctxt\<close> retrieves the current set of premises
+ from the context. This may be non-empty only if the Simplifier has been
+ told to utilize local assumptions in the first place (cf.\ the options in
+ \secref{sec:simp-meth}).

  As an example, consider the following alternative subgoaler:
  \<close>
@@ -923,9 +892,9 @@
  asm_simp_tac ctxt
  \<close>

-text \<open>This tactic first tries to solve the subgoal by assumption or
- by resolving with with one of the premises, calling simplification
- only if that fails.\<close>
+text \<open>
+ This tactic first tries to solve the subgoal by assumption or by resolving
+ with with one of the premises, calling simplification only if that fails.\<close>


  subsubsection \<open>The solver\<close>
@@ -941,89 +910,76 @@
  @{index_ML_op addSSolver: "Proof.context * solver -> Proof.context"} \\
  \end{mldecls}

- A solver is a tactic that attempts to solve a subgoal after
- simplification. Its core functionality is to prove trivial subgoals
- such as @{prop "True"} and \<open>t = t\<close>, but object-logics might
- be more ambitious. For example, Isabelle/HOL performs a restricted
- version of linear arithmetic here.
+ A solver is a tactic that attempts to solve a subgoal after simplification.
+ Its core functionality is to prove trivial subgoals such as @{prop "True"}
+ and \<open>t = t\<close>, but object-logics might be more ambitious. For example,
+ Isabelle/HOL performs a restricted version of linear arithmetic here.

- Solvers are packaged up in abstract type @{ML_type solver}, with
- @{ML Simplifier.mk_solver} as the only operation to create a solver.
+ Solvers are packaged up in abstract type @{ML_type solver}, with @{ML
+ Simplifier.mk_solver} as the only operation to create a solver.

  \<^medskip>
- Rewriting does not instantiate unknowns. For example,
- rewriting alone cannot prove \<open>a \<in> ?A\<close> since this requires
- instantiating \<open>?A\<close>. The solver, however, is an arbitrary
- tactic and may instantiate unknowns as it pleases. This is the only
- way the Simplifier can handle a conditional rewrite rule whose
- condition contains extra variables. When a simplification tactic is
- to be combined with other provers, especially with the Classical
- Reasoner, it is important whether it can be considered safe or not.
- For this reason a simpset contains two solvers: safe and unsafe.
+ Rewriting does not instantiate unknowns. For example, rewriting alone cannot
+ prove \<open>a \<in> ?A\<close> since this requires instantiating \<open>?A\<close>. The solver, however,
+ is an arbitrary tactic and may instantiate unknowns as it pleases. This is
+ the only way the Simplifier can handle a conditional rewrite rule whose
+ condition contains extra variables. When a simplification tactic is to be
+ combined with other provers, especially with the Classical Reasoner, it is
+ important whether it can be considered safe or not. For this reason a
+ simpset contains two solvers: safe and unsafe.

- The standard simplification strategy solely uses the unsafe solver,
- which is appropriate in most cases. For special applications where
- the simplification process is not allowed to instantiate unknowns
- within the goal, simplification starts with the safe solver, but may
- still apply the ordinary unsafe one in nested simplifications for
- conditional rules or congruences. Note that in this way the overall
- tactic is not totally safe: it may instantiate unknowns that appear
- also in other subgoals.
-
- \<^descr> @{ML Simplifier.mk_solver}~\<open>name tac\<close> turns \<open>tac\<close> into a solver; the \<open>name\<close> is only attached as a
- comment and has no further significance.
+ The standard simplification strategy solely uses the unsafe solver, which is
+ appropriate in most cases. For special applications where the simplification
+ process is not allowed to instantiate unknowns within the goal,
+ simplification starts with the safe solver, but may still apply the ordinary
+ unsafe one in nested simplifications for conditional rules or congruences.
+ Note that in this way the overall tactic is not totally safe: it may
+ instantiate unknowns that appear also in other subgoals.

- \<^descr> \<open>ctxt setSSolver solver\<close> installs \<open>solver\<close> as
- the safe solver of \<open>ctxt\<close>.
+ \<^descr> @{ML Simplifier.mk_solver}~\<open>name tac\<close> turns \<open>tac\<close> into a solver; the
+ \<open>name\<close> is only attached as a comment and has no further significance.

- \<^descr> \<open>ctxt addSSolver solver\<close> adds \<open>solver\<close> as an
- additional safe solver; it will be tried after the solvers which had
- already been present in \<open>ctxt\<close>.
+ \<^descr> \<open>ctxt setSSolver solver\<close> installs \<open>solver\<close> as the safe solver of \<open>ctxt\<close>.

- \<^descr> \<open>ctxt setSolver solver\<close> installs \<open>solver\<close> as the
- unsafe solver of \<open>ctxt\<close>.
+ \<^descr> \<open>ctxt addSSolver solver\<close> adds \<open>solver\<close> as an additional safe solver; it
+ will be tried after the solvers which had already been present in \<open>ctxt\<close>.

- \<^descr> \<open>ctxt addSolver solver\<close> adds \<open>solver\<close> as an
- additional unsafe solver; it will be tried after the solvers which
- had already been present in \<open>ctxt\<close>.
+ \<^descr> \<open>ctxt setSolver solver\<close> installs \<open>solver\<close> as the unsafe solver of \<open>ctxt\<close>.
+
+ \<^descr> \<open>ctxt addSolver solver\<close> adds \<open>solver\<close> as an additional unsafe solver; it
+ will be tried after the solvers which had already been present in \<open>ctxt\<close>.


  \<^medskip>
- The solver tactic is invoked with the context of the
- running Simplifier. Further operations
- may be used to retrieve relevant information, such as the list of
- local Simplifier premises via @{ML Simplifier.prems_of} --- this
- list may be non-empty only if the Simplifier runs in a mode that
- utilizes local assumptions (see also \secref{sec:simp-meth}). The
- solver is also presented the full goal including its assumptions in
- any case. Thus it can use these (e.g.\ by calling @{ML
- assume_tac}), even if the Simplifier proper happens to ignore local
- premises at the moment.
+ The solver tactic is invoked with the context of the running Simplifier.
+ Further operations may be used to retrieve relevant information, such as the
+ list of local Simplifier premises via @{ML Simplifier.prems_of} --- this
+ list may be non-empty only if the Simplifier runs in a mode that utilizes
+ local assumptions (see also \secref{sec:simp-meth}). The solver is also
+ presented the full goal including its assumptions in any case. Thus it can
+ use these (e.g.\ by calling @{ML assume_tac}), even if the Simplifier proper
+ happens to ignore local premises at the moment.

  \<^medskip>
- As explained before, the subgoaler is also used to solve
- the premises of congruence rules. These are usually of the form
- \<open>s = ?x\<close>, where \<open>s\<close> needs to be simplified and
- \<open>?x\<close> needs to be instantiated with the result. Typically,
+ As explained before, the subgoaler is also used to solve the premises of
+ congruence rules. These are usually of the form \<open>s = ?x\<close>, where \<open>s\<close> needs to
+ be simplified and \<open>?x\<close> needs to be instantiated with the result. Typically,
  the subgoaler will invoke the Simplifier at some point, which will
- eventually call the solver. For this reason, solver tactics must be
- prepared to solve goals of the form \<open>t = ?x\<close>, usually by
- reflexivity. In particular, reflexivity should be tried before any
- of the fancy automated proof tools.
+ eventually call the solver. For this reason, solver tactics must be prepared
+ to solve goals of the form \<open>t = ?x\<close>, usually by reflexivity. In particular,
+ reflexivity should be tried before any of the fancy automated proof tools.

- It may even happen that due to simplification the subgoal is no
- longer an equality. For example, \<open>False \<longleftrightarrow> ?Q\<close> could be
- rewritten to \<open>\<not> ?Q\<close>. To cover this case, the solver could
- try resolving with the theorem \<open>\<not> False\<close> of the
+ It may even happen that due to simplification the subgoal is no longer an
+ equality. For example, \<open>False \<longleftrightarrow> ?Q\<close> could be rewritten to \<open>\<not> ?Q\<close>. To cover
+ this case, the solver could try resolving with the theorem \<open>\<not> False\<close> of the
  object-logic.

  \<^medskip>
  \begin{warn}
- If a premise of a congruence rule cannot be proved, then the
- congruence is ignored. This should only happen if the rule is
- \<^emph>\<open>conditional\<close> --- that is, contains premises not of the form
- \<open>t = ?x\<close>. Otherwise it indicates that some congruence rule,
- or possibly the subgoaler or solver, is faulty.
+ If a premise of a congruence rule cannot be proved, then the congruence is
+ ignored. This should only happen if the rule is \<^emph>\<open>conditional\<close> --- that is,
+ contains premises not of the form \<open>t = ?x\<close>. Otherwise it indicates that some
+ congruence rule, or possibly the subgoaler or solver, is faulty.
  \end{warn}
  \<close>

@@ -1042,69 +998,62 @@
  @{index_ML Splitter.del_split: "thm -> Proof.context -> Proof.context"} \\
  \end{mldecls}

- The looper is a list of tactics that are applied after
- simplification, in case the solver failed to solve the simplified
- goal. If the looper succeeds, the simplification process is started
- all over again. Each of the subgoals generated by the looper is
- attacked in turn, in reverse order.
+ The looper is a list of tactics that are applied after simplification, in
+ case the solver failed to solve the simplified goal. If the looper succeeds,
+ the simplification process is started all over again. Each of the subgoals
+ generated by the looper is attacked in turn, in reverse order.

- A typical looper is \<^emph>\<open>case splitting\<close>: the expansion of a
- conditional. Another possibility is to apply an elimination rule on
- the assumptions. More adventurous loopers could start an induction.
+ A typical looper is \<^emph>\<open>case splitting\<close>: the expansion of a conditional.
+ Another possibility is to apply an elimination rule on the assumptions. More
+ adventurous loopers could start an induction.

- \<^descr> \<open>ctxt setloop tac\<close> installs \<open>tac\<close> as the only
- looper tactic of \<open>ctxt\<close>.
+ \<^descr> \<open>ctxt setloop tac\<close> installs \<open>tac\<close> as the only looper tactic of \<open>ctxt\<close>.
2.1048 2.1049 - \<^descr> \<open>ctxt addloop (name, tac)\<close> adds \<open>tac\<close> as an 2.1050 - additional looper tactic with name \<open>name\<close>, which is 2.1051 - significant for managing the collection of loopers. The tactic will 2.1052 - be tried after the looper tactics that had already been present in 2.1053 - \<open>ctxt\<close>. 2.1054 + \<^descr> \<open>ctxt addloop (name, tac)\<close> adds \<open>tac\<close> as an additional looper tactic 2.1055 + with name \<open>name\<close>, which is significant for managing the collection of 2.1056 + loopers. The tactic will be tried after the looper tactics that had 2.1057 + already been present in \<open>ctxt\<close>. 2.1058 2.1059 - \<^descr> \<open>ctxt delloop name\<close> deletes the looper tactic that was 2.1060 - associated with \<open>name\<close> from \<open>ctxt\<close>. 2.1061 + \<^descr> \<open>ctxt delloop name\<close> deletes the looper tactic that was associated with 2.1062 + \<open>name\<close> from \<open>ctxt\<close>. 2.1063 2.1064 - \<^descr> @{ML Splitter.add_split}~\<open>thm ctxt\<close> adds split tactics 2.1065 - for \<open>thm\<close> as additional looper tactics of \<open>ctxt\<close>. 2.1066 + \<^descr> @{ML Splitter.add_split}~\<open>thm ctxt\<close> adds split tactics for \<open>thm\<close> as 2.1067 + additional looper tactics of \<open>ctxt\<close>. 2.1068 2.1069 - \<^descr> @{ML Splitter.del_split}~\<open>thm ctxt\<close> deletes the split 2.1070 - tactic corresponding to \<open>thm\<close> from the looper tactics of 2.1071 - \<open>ctxt\<close>. 2.1072 + \<^descr> @{ML Splitter.del_split}~\<open>thm ctxt\<close> deletes the split tactic 2.1073 + corresponding to \<open>thm\<close> from the looper tactics of \<open>ctxt\<close>. 2.1074 2.1075 - 2.1076 - The splitter replaces applications of a given function; the 2.1077 - right-hand side of the replacement can be anything. 
For example, 2.1078 - here is a splitting rule for conditional expressions: 2.1079 + The splitter replaces applications of a given function; the right-hand side 2.1080 + of the replacement can be anything. For example, here is a splitting rule 2.1081 + for conditional expressions: 2.1082 2.1083 @{text [display] "?P (if ?Q ?x ?y) \<longleftrightarrow> (?Q \<longrightarrow> ?P ?x) \<and> (\<not> ?Q \<longrightarrow> ?P ?y)"} 2.1084 2.1085 - Another example is the elimination operator for Cartesian products 2.1086 - (which happens to be called @{const case_prod} in Isabelle/HOL: 2.1087 + Another example is the elimination operator for Cartesian products (which 2.1088 + happens to be called @{const case_prod} in Isabelle/HOL): 2.1089 2.1090 @{text [display] "?P (case_prod ?f ?p) \<longleftrightarrow> (\<forall>a b. ?p = (a, b) \<longrightarrow> ?P (?f a b))"} 2.1091 2.1092 - For technical reasons, there is a distinction between case splitting 2.1093 - in the conclusion and in the premises of a subgoal. The former is 2.1094 - done by @{ML Splitter.split_tac} with rules like @{thm [source] 2.1095 - if_split} or @{thm [source] option.split}, which do not split the 2.1096 - subgoal, while the latter is done by @{ML Splitter.split_asm_tac} 2.1097 - with rules like @{thm [source] if_split_asm} or @{thm [source] 2.1098 - option.split_asm}, which split the subgoal. The function @{ML 2.1099 - Splitter.add_split} automatically takes care of which tactic to 2.1100 - call, analyzing the form of the rules given as argument; it is the 2.1101 - same operation behind \<open>split\<close> attribute or method modifier 2.1102 - syntax in the Isar source language. 2.1103 + For technical reasons, there is a distinction between case splitting in the 2.1104 + conclusion and in the premises of a subgoal.
The former is done by @{ML 2.1105 + Splitter.split_tac} with rules like @{thm [source] if_split} or @{thm 2.1106 + [source] option.split}, which do not split the subgoal, while the latter is 2.1107 + done by @{ML Splitter.split_asm_tac} with rules like @{thm [source] 2.1108 + if_split_asm} or @{thm [source] option.split_asm}, which split the subgoal. 2.1109 + The function @{ML Splitter.add_split} automatically takes care of which 2.1110 + tactic to call, analyzing the form of the rules given as argument; it is the 2.1111 + same operation behind \<open>split\<close> attribute or method modifier syntax in the 2.1112 + Isar source language. 2.1113 2.1114 - Case splits should be allowed only when necessary; they are 2.1115 - expensive and hard to control. Case-splitting on if-expressions in 2.1116 - the conclusion is usually beneficial, so it is enabled by default in 2.1117 - Isabelle/HOL and Isabelle/FOL/ZF. 2.1118 + Case splits should be allowed only when necessary; they are expensive and 2.1119 + hard to control. Case-splitting on if-expressions in the conclusion is 2.1120 + usually beneficial, so it is enabled by default in Isabelle/HOL and 2.1121 + Isabelle/FOL/ZF. 2.1122 2.1123 \begin{warn} 2.1124 - With @{ML Splitter.split_asm_tac} as looper component, the 2.1125 - Simplifier may split subgoals! This might cause unexpected problems 2.1126 - in tactic expressions that silently assume 0 or 1 subgoals after 2.1127 - simplification. 2.1128 + With @{ML Splitter.split_asm_tac} as looper component, the Simplifier may 2.1129 + split subgoals! This might cause unexpected problems in tactic expressions 2.1130 + that silently assume 0 or 1 subgoals after simplification. 
2.1131 \end{warn} 2.1132 \<close> 2.1133 2.1134 @@ -1123,18 +1072,17 @@ 2.1135 opt: '(' ('no_asm' | 'no_asm_simp' | 'no_asm_use') ')' 2.1136 \<close>} 2.1137 2.1138 - \<^descr> @{attribute simplified}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> causes a theorem to 2.1139 - be simplified, either by exactly the specified rules \<open>a\<^sub>1, \<dots>, 2.1140 - a\<^sub>n\<close>, or the implicit Simplifier context if no arguments are given. 2.1141 - The result is fully simplified by default, including assumptions and 2.1142 - conclusion; the options \<open>no_asm\<close> etc.\ tune the Simplifier in 2.1143 - the same way as the for the \<open>simp\<close> method. 2.1144 + \<^descr> @{attribute simplified}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> causes a theorem to be simplified, 2.1145 + either by exactly the specified rules \<open>a\<^sub>1, \<dots>, a\<^sub>n\<close>, or the implicit 2.1146 + Simplifier context if no arguments are given. The result is fully simplified 2.1147 + by default, including assumptions and conclusion; the options \<open>no_asm\<close> etc.\ 2.1148 + tune the Simplifier in the same way as for the \<open>simp\<close> method. 2.1149 2.1150 - Note that forward simplification restricts the Simplifier to its 2.1151 - most basic operation of term rewriting; solver and looper tactics 2.1152 - (\secref{sec:simp-strategies}) are \<^emph>\<open>not\<close> involved here. The 2.1153 - @{attribute simplified} attribute should be only rarely required 2.1154 - under normal circumstances. 2.1155 + Note that forward simplification restricts the Simplifier to its most basic 2.1156 + operation of term rewriting; solver and looper tactics 2.1157 + (\secref{sec:simp-strategies}) are \<^emph>\<open>not\<close> involved here. The @{attribute 2.1158 + simplified} attribute should be only rarely required under normal 2.1159 + circumstances. 2.1160 \<close> 2.1161 2.1162
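The looper and splitter operations documented in the hunks above can be exercised directly from Isabelle/ML. A minimal sketch (the context value `ctxt` and the label `"my_loop"` are illustrative placeholders, not from the changeset):

```sml
(* hedged sketch: managing looper tactics of a simplifier context;
   ctxt is an arbitrary Proof.context, "my_loop" an arbitrary label *)
val ctxt1 = ctxt addloop ("my_loop", fn ctxt => resolve_tac ctxt @{thms conjI});
val ctxt2 = ctxt1 delloop "my_loop";

(* split tactics for a rule are installed/removed via the Splitter *)
val ctxt3 = ctxt |> Splitter.add_split @{thm if_split};
val ctxt4 = ctxt3 |> Splitter.del_split @{thm if_split};
```

Per the text above, `Splitter.add_split` itself decides between `split_tac` and `split_asm_tac` by analyzing the rule's form.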

3.1 --- a/src/Pure/Concurrent/future.ML Fri Mar 18 12:54:20 2016 +0100 3.2 +++ b/src/Pure/Concurrent/future.ML Fri Mar 18 20:29:50 2016 +0100 3.3 @@ -99,6 +99,13 @@ 3.4 fun peek x = Single_Assignment.peek (result_of x); 3.5 fun is_finished x = is_some (peek x); 3.6 3.7 +val _ = 3.8 + PolyML.addPrettyPrinter (fn depth => fn pretty => fn x => 3.9 + (case peek x of 3.10 + NONE => PolyML.PrettyString "<future>" 3.11 + | SOME (Exn.Exn _) => PolyML.PrettyString "<failed>" 3.12 + | SOME (Exn.Res y) => pretty (y, depth))); 3.13 + 3.14 3.15 3.16 (** scheduling **) 3.17 @@ -665,4 +672,3 @@ 3.18 end; 3.19 3.20 type 'a future = 'a Future.future; 3.21 -

4.1 --- a/src/Pure/Concurrent/lazy.ML Fri Mar 18 12:54:20 2016 +0100 4.2 +++ b/src/Pure/Concurrent/lazy.ML Fri Mar 18 20:29:50 2016 +0100 4.3 @@ -99,7 +99,16 @@ 4.4 if is_finished x then Future.value_result (force_result x) 4.5 else (singleton o Future.forks) params (fn () => force x); 4.6 4.7 + 4.8 +(* toplevel pretty printing *) 4.9 + 4.10 +val _ = 4.11 + PolyML.addPrettyPrinter (fn depth => fn pretty => fn x => 4.12 + (case peek x of 4.13 + NONE => PolyML.PrettyString "<lazy>" 4.14 + | SOME (Exn.Exn _) => PolyML.PrettyString "<failed>" 4.15 + | SOME (Exn.Res y) => pretty (y, depth))); 4.16 + 4.17 end; 4.18 4.19 type 'a lazy = 'a Lazy.lazy; 4.20 -
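The printer hunks for futures and lazy values follow the same peek-based pattern; their effect at the Poly/ML toplevel can be sketched as follows (illustrative behavior, not part of the changeset):

```sml
(* illustrative toplevel behavior under the new pretty printers *)
val x = Future.value 42;          (* finished future: printed as its value *)
val y = Lazy.lazy (fn () => 1);   (* not yet forced: printed as <lazy> *)
val _ = Lazy.force y;             (* after forcing, y is printed as its value *)
```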

5.1 --- a/src/Pure/Concurrent/standard_thread.scala Fri Mar 18 12:54:20 2016 +0100 5.2 +++ b/src/Pure/Concurrent/standard_thread.scala Fri Mar 18 20:29:50 2016 +0100 5.3 @@ -11,8 +11,6 @@ 5.4 import java.lang.Thread 5.5 import java.util.concurrent.{ThreadPoolExecutor, TimeUnit, LinkedBlockingQueue, ThreadFactory} 5.6 5.7 -import scala.concurrent.{ExecutionContext, ExecutionContextExecutor} 5.8 - 5.9 5.10 object Standard_Thread 5.11 { 5.12 @@ -50,9 +48,6 @@ 5.13 executor 5.14 } 5.15 5.16 - lazy val execution_context: ExecutionContextExecutor = 5.17 - ExecutionContext.fromExecutorService(pool) 5.18 - 5.19 5.20 /* delayed events */ 5.21

6.1 --- a/src/Pure/Concurrent/synchronized.ML Fri Mar 18 12:54:20 2016 +0100 6.2 +++ b/src/Pure/Concurrent/synchronized.ML Fri Mar 18 20:29:50 2016 +0100 6.3 @@ -66,4 +66,9 @@ 6.4 6.5 end; 6.6 6.7 + 6.8 +(* toplevel pretty printing *) 6.9 + 6.10 +val _ = PolyML.addPrettyPrinter (fn depth => fn pretty => fn var => pretty (value var, depth)); 6.11 + 6.12 end;

7.1 --- a/src/Pure/Concurrent/task_queue.ML Fri Mar 18 12:54:20 2016 +0100 7.2 +++ b/src/Pure/Concurrent/task_queue.ML Fri Mar 18 20:29:50 2016 +0100 7.3 @@ -394,4 +394,10 @@ 7.4 | NONE => ((NONE, deps'), queue))) 7.5 end; 7.6 7.7 + 7.8 +(* toplevel pretty printing *) 7.9 + 7.10 +val _ = PolyML.addPrettyPrinter (fn _ => fn _ => Pretty.to_polyml o Pretty.str o str_of_task); 7.11 +val _ = PolyML.addPrettyPrinter (fn _ => fn _ => Pretty.to_polyml o Pretty.str o str_of_group); 7.12 + 7.13 end;

8.1 --- a/src/Pure/General/binding.ML Fri Mar 18 12:54:20 2016 +0100 8.2 +++ b/src/Pure/General/binding.ML Fri Mar 18 20:29:50 2016 +0100 8.3 @@ -35,7 +35,6 @@ 8.4 val concealed: binding -> binding 8.5 val pretty: binding -> Pretty.T 8.6 val print: binding -> string 8.7 - val pp: binding -> Pretty.T 8.8 val bad: binding -> string 8.9 val check: binding -> unit 8.10 val name_spec: scope list -> (string * bool) list -> binding -> 8.11 @@ -155,7 +154,7 @@ 8.12 8.13 val print = Pretty.unformatted_string_of o pretty; 8.14 8.15 -val pp = pretty o set_pos Position.none; 8.16 +val _ = PolyML.addPrettyPrinter (fn _ => fn _ => Pretty.to_polyml o pretty o set_pos Position.none); 8.17 8.18 8.19 (* check *) 8.20 @@ -191,6 +190,7 @@ 8.21 andalso error (bad binding); 8.22 in {restriction = restriction, concealed = concealed, spec = if null spec2 then [] else spec} end; 8.23 8.24 + 8.25 end; 8.26 8.27 type binding = Binding.binding;

9.1 --- a/src/Pure/General/path.ML Fri Mar 18 12:54:20 2016 +0100 9.2 +++ b/src/Pure/General/path.ML Fri Mar 18 20:29:50 2016 +0100 9.3 @@ -173,6 +173,8 @@ 9.4 9.5 val print = Pretty.unformatted_string_of o pretty; 9.6 9.7 +val _ = PolyML.addPrettyPrinter (fn _ => fn _ => Pretty.to_polyml o pretty); 9.8 + 9.9 9.10 (* base element *) 9.11

10.1 --- a/src/Pure/General/pretty.ML Fri Mar 18 12:54:20 2016 +0100 10.2 +++ b/src/Pure/General/pretty.ML Fri Mar 18 20:29:50 2016 +0100 10.3 @@ -72,8 +72,10 @@ 10.4 val block_enclose: T * T -> T list -> T 10.5 val writeln_chunks: T list -> unit 10.6 val writeln_chunks2: T list -> unit 10.7 - val to_ML: T -> ML_Pretty.pretty 10.8 + val to_ML: int -> T -> ML_Pretty.pretty 10.9 val from_ML: ML_Pretty.pretty -> T 10.10 + val to_polyml: T -> PolyML.pretty 10.11 + val from_polyml: PolyML.pretty -> T 10.12 end; 10.13 10.14 structure Pretty: PRETTY = 10.15 @@ -372,13 +374,14 @@ 10.16 10.17 10.18 10.19 -(** ML toplevel pretty printing **) 10.20 +(** toplevel pretty printing **) 10.21 10.22 -fun to_ML (Block (m, consistent, indent, prts, _)) = 10.23 - ML_Pretty.Block (m, consistent, FixedInt.fromInt indent, map to_ML prts) 10.24 - | to_ML (Break (force, wd, ind)) = 10.25 +fun to_ML 0 (Block _) = ML_Pretty.str "..." 10.26 + | to_ML depth (Block (m, consistent, indent, prts, _)) = 10.27 + ML_Pretty.Block (m, consistent, FixedInt.fromInt indent, map (to_ML (depth - 1)) prts) 10.28 + | to_ML _ (Break (force, wd, ind)) = 10.29 ML_Pretty.Break (force, FixedInt.fromInt wd, FixedInt.fromInt ind) 10.30 - | to_ML (Str (s, len)) = ML_Pretty.String (s, FixedInt.fromInt len); 10.31 + | to_ML _ (Str (s, len)) = ML_Pretty.String (s, FixedInt.fromInt len); 10.32 10.33 fun from_ML (ML_Pretty.Block (m, consistent, indent, prts)) = 10.34 make_block m consistent (FixedInt.toInt indent) (map from_ML prts) 10.35 @@ -386,6 +389,14 @@ 10.36 Break (force, FixedInt.toInt wd, FixedInt.toInt ind) 10.37 | from_ML (ML_Pretty.String (s, len)) = Str (s, FixedInt.toInt len); 10.38 10.39 +val to_polyml = to_ML ~1 #> ML_Pretty.to_polyml; 10.40 +val from_polyml = ML_Pretty.from_polyml #> from_ML; 10.41 + 10.42 end; 10.43 10.44 +val _ = 10.45 + PolyML.addPrettyPrinter (fn depth => fn _ => ML_Pretty.to_polyml o to_ML (depth * 2 + 1) o quote); 10.46 + 10.47 +val _ = PolyML.addPrettyPrinter (fn _ => fn _ => 
to_polyml o position); 10.48 + 10.49 end;
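The new depth parameter of `Pretty.to_ML` truncates nested blocks, matching the Poly/ML convention of printing `...` at the depth limit. A small sketch (values are illustrative):

```sml
(* hedged sketch of depth-limited conversion *)
val prt =
  Pretty.block [Pretty.str "f", Pretty.brk 1, Pretty.block [Pretty.str "x"]];
val shallow = Pretty.to_ML 0 prt;   (* a Block at depth 0 collapses to "..." *)
val full = Pretty.to_ML ~1 prt;     (* ~1 never reaches 0, i.e. unlimited
                                       depth, as used by Pretty.to_polyml *)
```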

11.1 --- a/src/Pure/General/secure.ML Fri Mar 18 12:54:20 2016 +0100 11.2 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 11.3 @@ -1,27 +0,0 @@ 11.4 -(* Title: Pure/General/secure.ML 11.5 - Author: Makarius 11.6 - 11.7 -Secure critical operations. 11.8 -*) 11.9 - 11.10 -signature SECURE = 11.11 -sig 11.12 - val set_secure: unit -> unit 11.13 - val is_secure: unit -> bool 11.14 - val deny: string -> unit 11.15 - val deny_ml: unit -> unit 11.16 -end; 11.17 - 11.18 -structure Secure: SECURE = 11.19 -struct 11.20 - 11.21 -val secure = Unsynchronized.ref false; 11.22 - 11.23 -fun set_secure () = secure := true; 11.24 -fun is_secure () = ! secure; 11.25 - 11.26 -fun deny msg = if is_secure () then error msg else (); 11.27 - 11.28 -fun deny_ml () = deny "Cannot evaluate ML source in secure mode"; 11.29 - 11.30 -end;

12.1 --- a/src/Pure/General/sha1.ML Fri Mar 18 12:54:20 2016 +0100 12.2 +++ b/src/Pure/General/sha1.ML Fri Mar 18 20:29:50 2016 +0100 12.3 @@ -1,8 +1,8 @@ 12.4 (* Title: Pure/General/sha1.ML 12.5 Author: Makarius 12.6 + Author: Sascha Boehme, TU Muenchen 12.7 12.8 -Digesting strings according to SHA-1 (see RFC 3174) -- relatively slow 12.9 -version in pure ML. 12.10 +Digesting strings according to SHA-1 (see RFC 3174). 12.11 *) 12.12 12.13 signature SHA1 = 12.14 @@ -11,11 +11,16 @@ 12.15 val digest: string -> digest 12.16 val rep: digest -> string 12.17 val fake: string -> digest 12.18 + val test_samples: unit -> unit 12.19 end; 12.20 12.21 structure SHA1: SHA1 = 12.22 struct 12.23 12.24 +(** internal implementation in ML -- relatively slow **) 12.25 + 12.26 +local 12.27 + 12.28 (* 32bit words *) 12.29 12.30 infix 4 << >>; 12.31 @@ -45,7 +50,7 @@ 12.32 12.33 (* padding *) 12.34 12.35 -fun pack_bytes 0 n = "" 12.36 +fun pack_bytes 0 _ = "" 12.37 | pack_bytes k n = pack_bytes (k - 1) (n div 256) ^ chr (n mod 256); 12.38 12.39 fun padded_text str = 12.40 @@ -61,7 +66,7 @@ 12.41 in ((len + size padding) div 4, word) end; 12.42 12.43 12.44 -(* digest_string *) 12.45 +(* digest_string_internal *) 12.46 12.47 fun digest_word (i, w, {a, b, c, d, e}) = 12.48 let 12.49 @@ -87,7 +92,9 @@ 12.50 e = d} 12.51 end; 12.52 12.53 -fun digest_string str = 12.54 +in 12.55 + 12.56 +fun digest_string_internal str = 12.57 let 12.58 val (text_len, text) = padded_text str; 12.59 12.60 @@ -129,14 +136,89 @@ 12.61 val hex = hex_word o hash; 12.62 in hex 0 ^ hex 1 ^ hex 2 ^ hex 3 ^ hex 4 end; 12.63 12.64 +end; 12.65 12.66 -(* type digest *) 12.67 + 12.68 +(** external implementation in C **) 12.69 + 12.70 +local 12.71 + 12.72 +(* digesting *) 12.73 + 12.74 +fun hex_digit i = if i < 10 then chr (ord "0" + i) else chr (ord "a" + i - 10); 12.75 + 12.76 +fun hex_string arr i = 12.77 + let val c = CInterface.fromCchar (CInterface.offset i CInterface.Cchar arr) 12.78 + in (op ^) (apply2 hex_digit 
(Integer.div_mod (Char.ord c) 16)) end 12.79 + 12.80 +val lib_path = 12.81 + ("$ML_HOME/" ^ (if ML_System.platform_is_windows then "sha1.dll" else "libsha1.so")) 12.82 + |> Path.explode; 12.83 + 12.84 +val STRING_INPUT_BYTES = 12.85 + CInterface.mkConversion undefined (CInterface.toCbytes o Byte.stringToBytes) 12.86 + (CInterface.Cpointer CInterface.Cchar); 12.87 + 12.88 +in 12.89 + 12.90 +fun digest_string_external str = 12.91 + let 12.92 + val digest = CInterface.alloc 20 CInterface.Cchar; 12.93 + val _ = 12.94 + CInterface.call3 12.95 + (CInterface.get_sym (File.platform_path lib_path) "sha1_buffer") 12.96 + (STRING_INPUT_BYTES, CInterface.LONG, CInterface.POINTER) 12.97 + CInterface.POINTER (str, size str, CInterface.address digest); 12.98 + in fold (suffix o hex_string digest) (0 upto 19) "" end; 12.99 + 12.100 +end; 12.101 + 12.102 + 12.103 + 12.104 +(** type digest **) 12.105 12.106 datatype digest = Digest of string; 12.107 12.108 -val digest = Digest o digest_string; 12.109 fun rep (Digest s) = s; 12.110 - 12.111 val fake = Digest; 12.112 12.113 +val _ = PolyML.addPrettyPrinter (fn _ => fn _ => Pretty.to_polyml o Pretty.str o quote o rep); 12.114 + 12.115 +fun digest_string str = digest_string_external str 12.116 + handle CInterface.Foreign msg => 12.117 + (warning (msg ^ "\nUsing slow ML implementation of SHA1.digest"); digest_string_internal str); 12.118 + 12.119 +val digest = Digest o digest_string; 12.120 + 12.121 + 12.122 + 12.123 +(** SHA1 samples found in the wild **) 12.124 + 12.125 +local 12.126 + 12.127 +fun check (msg, key) = 12.128 + let val key' = rep (digest msg) in 12.129 + if key = key' then () 12.130 + else 12.131 + raise Fail ("SHA1 library integrity test failed on " ^ quote msg ^ ":\n" ^ 12.132 + key ^ " expected, but\n" ^ key' ^ " was found") 12.133 + end; 12.134 + 12.135 +in 12.136 + 12.137 +fun test_samples () = 12.138 + List.app check 12.139 + [("", "da39a3ee5e6b4b0d3255bfef95601890afd80709"), 12.140 + ("a", 
"86f7e437faa5a7fce15d1ddcb9eaeaea377667b8"), 12.141 + ("abc", "a9993e364706816aba3e25717850c26c9cd0d89d"), 12.142 + ("abcdefghijklmnopqrstuvwxyz", "32d10c7b8cf96570ca04ce37f2a19d84240d3a89"), 12.143 + ("The quick brown fox jumps over the lazy dog", "2fd4e1c67a2d28fced849ee1bb76e7391b93eb12"), 12.144 + (replicate_string 100 "\000", "ed4a77d1b56a118938788fc53037759b6c501e3d"), 12.145 + ("a\000b", "4a3dec2d1f8245280855c42db0ee4239f917fdb8"), 12.146 + ("\000\001", "3f29546453678b855931c174a97d6c0894b8f546")]; 12.147 + 12.148 end; 12.149 + 12.150 +val _ = test_samples (); 12.151 + 12.152 +end;
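With the internal ML and external C implementations merged into a single `SHA1` structure, client code is unchanged; a usage sketch, with the digest value taken from the sample list above:

```sml
(* illustrative use of the consolidated SHA1 interface *)
val d = SHA1.digest "abc";
val s = SHA1.rep d;   (* "a9993e364706816aba3e25717850c26c9cd0d89d",
                         per the samples above *)
val _ = SHA1.test_samples ();   (* re-runs the integrity checks on demand *)
```

The external C routine is preferred; on `CInterface.Foreign` failure the slow pure-ML fallback is used with a warning, as shown in `digest_string` above.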

13.1 --- a/src/Pure/General/sha1_polyml.ML Fri Mar 18 12:54:20 2016 +0100 13.2 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 13.3 @@ -1,52 +0,0 @@ 13.4 -(* Title: Pure/General/sha1_polyml.ML 13.5 - Author: Sascha Boehme, TU Muenchen 13.6 - 13.7 -Digesting strings according to SHA-1 (see RFC 3174) -- based on an 13.8 -external implementation in C with a fallback to an internal 13.9 -implementation. 13.10 -*) 13.11 - 13.12 -structure SHA1: SHA1 = 13.13 -struct 13.14 - 13.15 -(* digesting *) 13.16 - 13.17 -fun hex_digit i = if i < 10 then chr (ord "0" + i) else chr (ord "a" + i - 10); 13.18 - 13.19 -fun hex_string arr i = 13.20 - let val c = CInterface.fromCchar (CInterface.offset i CInterface.Cchar arr) 13.21 - in (op ^) (apply2 hex_digit (Integer.div_mod (Char.ord c) 16)) end 13.22 - 13.23 -val lib_path = 13.24 - ("$ML_HOME/" ^ (if ML_System.platform_is_windows then "sha1.dll" else "libsha1.so")) 13.25 - |> Path.explode; 13.26 - 13.27 -val STRING_INPUT_BYTES = 13.28 - CInterface.mkConversion undefined (CInterface.toCbytes o Byte.stringToBytes) 13.29 - (CInterface.Cpointer CInterface.Cchar); 13.30 - 13.31 -fun digest_external str = 13.32 - let 13.33 - val digest = CInterface.alloc 20 CInterface.Cchar; 13.34 - val _ = 13.35 - CInterface.call3 13.36 - (CInterface.get_sym (File.platform_path lib_path) "sha1_buffer") 13.37 - (STRING_INPUT_BYTES, CInterface.LONG, CInterface.POINTER) 13.38 - CInterface.POINTER (str, size str, CInterface.address digest); 13.39 - in fold (suffix o hex_string digest) (0 upto 19) "" end; 13.40 - 13.41 -fun digest_string str = digest_external str 13.42 - handle CInterface.Foreign msg => 13.43 - (warning (msg ^ "\nUsing slow ML implementation of SHA1.digest"); SHA1.rep (SHA1.digest str)); 13.44 - 13.45 - 13.46 -(* type digest *) 13.47 - 13.48 -datatype digest = Digest of string; 13.49 - 13.50 -val digest = Digest o digest_string; 13.51 -fun rep (Digest s) = s; 13.52 - 13.53 -val fake = Digest; 13.54 - 13.55 -end;

14.1 --- a/src/Pure/General/sha1_samples.ML Fri Mar 18 12:54:20 2016 +0100 14.2 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 14.3 @@ -1,36 +0,0 @@ 14.4 -(* Title: Pure/General/sha1_samples.ML 14.5 - Author: Makarius 14.6 - 14.7 -Some SHA1 samples found in the wild. 14.8 -*) 14.9 - 14.10 -signature SHA1_SAMPLES = 14.11 -sig 14.12 - val test: unit -> unit 14.13 -end; 14.14 - 14.15 -structure SHA1_Samples: SHA1_SAMPLES = 14.16 -struct 14.17 - 14.18 -fun check (msg, key) = 14.19 - let val key' = SHA1.rep (SHA1.digest msg) in 14.20 - if key = key' then () 14.21 - else 14.22 - raise Fail ("SHA1 library integrity test failed on " ^ quote msg ^ ":\n" ^ 14.23 - key ^ " expected, but\n" ^ key' ^ " was found") 14.24 - end; 14.25 - 14.26 -fun test () = 14.27 - List.app check 14.28 - [("", "da39a3ee5e6b4b0d3255bfef95601890afd80709"), 14.29 - ("a", "86f7e437faa5a7fce15d1ddcb9eaeaea377667b8"), 14.30 - ("abc", "a9993e364706816aba3e25717850c26c9cd0d89d"), 14.31 - ("abcdefghijklmnopqrstuvwxyz", "32d10c7b8cf96570ca04ce37f2a19d84240d3a89"), 14.32 - ("The quick brown fox jumps over the lazy dog", "2fd4e1c67a2d28fced849ee1bb76e7391b93eb12"), 14.33 - (replicate_string 100 "\000", "ed4a77d1b56a118938788fc53037759b6c501e3d"), 14.34 - ("a\000b", "4a3dec2d1f8245280855c42db0ee4239f917fdb8"), 14.35 - ("\000\001", "3f29546453678b855931c174a97d6c0894b8f546")]; 14.36 - 14.37 -val _ = test (); 14.38 - 14.39 -end;

15.1 --- a/src/Pure/Isar/proof.ML Fri Mar 18 12:54:20 2016 +0100 15.2 +++ b/src/Pure/Isar/proof.ML Fri Mar 18 20:29:50 2016 +0100 15.3 @@ -173,6 +173,10 @@ 15.4 (context * thm list list -> state -> state) * 15.5 (context * thm list list -> context -> context)}; 15.6 15.7 +val _ = 15.8 + PolyML.addPrettyPrinter (fn _ => fn _ => fn _: state => 15.9 + Pretty.to_polyml (Pretty.str "<Proof.state>")); 15.10 + 15.11 fun make_goal (statement, using, goal, before_qed, after_qed) = 15.12 Goal {statement = statement, using = using, goal = goal, 15.13 before_qed = before_qed, after_qed = after_qed};

16.1 --- a/src/Pure/Isar/toplevel.ML Fri Mar 18 12:54:20 2016 +0100 16.2 +++ b/src/Pure/Isar/toplevel.ML Fri Mar 18 20:29:50 2016 +0100 16.3 @@ -199,6 +199,8 @@ 16.4 16.5 fun pretty_abstract state = Pretty.str ("<Isar " ^ str_of_state state ^ ">"); 16.6 16.7 +val _ = PolyML.addPrettyPrinter (fn _ => fn _ => Pretty.to_polyml o pretty_abstract); 16.8 + 16.9 16.10 16.11 (** toplevel transitions **)

17.1 --- a/src/Pure/ML/install_pp_polyml.ML Fri Mar 18 12:54:20 2016 +0100 17.2 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 17.3 @@ -1,164 +0,0 @@ 17.4 -(* Title: Pure/ML/install_pp_polyml.ML 17.5 - Author: Makarius 17.6 - 17.7 -ML toplevel pretty-printing for Poly/ML. 17.8 -*) 17.9 - 17.10 -PolyML.addPrettyPrinter (fn _ => fn _ => fn _: Pretty.T => 17.11 - ML_Pretty.to_polyml (Pretty.to_ML (Pretty.str "<pretty>"))); 17.12 - 17.13 -PolyML.addPrettyPrinter (fn _ => fn _ => fn lexicon => 17.14 - ML_Pretty.to_polyml (Pretty.to_ML (Lexicon.pp_lexicon lexicon))); 17.15 - 17.16 -PolyML.addPrettyPrinter (fn _ => fn _ => fn task => 17.17 - ML_Pretty.to_polyml (Pretty.to_ML (Pretty.str (Task_Queue.str_of_task task)))); 17.18 - 17.19 -PolyML.addPrettyPrinter (fn _ => fn _ => fn group => 17.20 - ML_Pretty.to_polyml (Pretty.to_ML (Pretty.str (Task_Queue.str_of_group group)))); 17.21 - 17.22 -PolyML.addPrettyPrinter (fn _ => fn _ => fn pos => 17.23 - ML_Pretty.to_polyml (Pretty.to_ML (Pretty.position pos))); 17.24 - 17.25 -PolyML.addPrettyPrinter (fn _ => fn _ => fn binding => 17.26 - ML_Pretty.to_polyml (Pretty.to_ML (Binding.pp binding))); 17.27 - 17.28 -PolyML.addPrettyPrinter (fn _ => fn _ => fn th => 17.29 - ML_Pretty.to_polyml (Pretty.to_ML (Proof_Display.pp_thm Thy_Info.pure_theory th))); 17.30 - 17.31 -PolyML.addPrettyPrinter (fn _ => fn _ => fn ct => 17.32 - ML_Pretty.to_polyml (Pretty.to_ML (Proof_Display.pp_cterm Thy_Info.pure_theory ct))); 17.33 - 17.34 -PolyML.addPrettyPrinter (fn _ => fn _ => fn cT => 17.35 - ML_Pretty.to_polyml (Pretty.to_ML (Proof_Display.pp_ctyp Thy_Info.pure_theory cT))); 17.36 - 17.37 -PolyML.addPrettyPrinter (fn _ => fn _ => fn T => 17.38 - ML_Pretty.to_polyml (Pretty.to_ML (Proof_Display.pp_typ Thy_Info.pure_theory T))); 17.39 - 17.40 -PolyML.addPrettyPrinter (fn _ => fn _ => fn thy => 17.41 - ML_Pretty.to_polyml (Pretty.to_ML (Context.pretty_thy thy))); 17.42 - 17.43 -PolyML.addPrettyPrinter (fn _ => fn _ => fn ctxt => 17.44 - 
ML_Pretty.to_polyml (Pretty.to_ML (Proof_Display.pp_context ctxt))); 17.45 - 17.46 -PolyML.addPrettyPrinter (fn _ => fn _ => fn ast => 17.47 - ML_Pretty.to_polyml (Pretty.to_ML (Ast.pretty_ast ast))); 17.48 - 17.49 -PolyML.addPrettyPrinter (fn _ => fn _ => fn path => 17.50 - ML_Pretty.to_polyml (Pretty.to_ML (Path.pretty path))); 17.51 - 17.52 -PolyML.addPrettyPrinter (fn _ => fn _ => fn digest => 17.53 - ML_Pretty.to_polyml (Pretty.to_ML (Pretty.str (quote (SHA1.rep digest))))); 17.54 - 17.55 -PolyML.addPrettyPrinter (fn _ => fn _ => fn _: Proof.state => 17.56 - ML_Pretty.to_polyml (Pretty.to_ML (Pretty.str "<Proof.state>"))); 17.57 - 17.58 -PolyML.addPrettyPrinter (fn _ => fn _ => fn st => 17.59 - ML_Pretty.to_polyml (Pretty.to_ML (Toplevel.pretty_abstract st))); 17.60 - 17.61 -PolyML.addPrettyPrinter (fn _ => fn _ => fn morphism => 17.62 - ML_Pretty.to_polyml (Pretty.to_ML (Morphism.pretty morphism))); 17.63 - 17.64 -PolyML.addPrettyPrinter (fn depth => fn _ => fn str => 17.65 - ML_Pretty.to_polyml (Pretty.to_ML (ML_Syntax.pretty_string (FixedInt.toInt (depth * 100)) str))); 17.66 - 17.67 -PolyML.addPrettyPrinter (fn depth => fn _ => fn tree => 17.68 - ML_Pretty.to_polyml (Pretty.to_ML (XML.pretty (FixedInt.toInt depth) tree))); 17.69 - 17.70 -PolyML.addPrettyPrinter (fn depth => fn pretty => fn var => 17.71 - pretty (Synchronized.value var, depth)); 17.72 - 17.73 -PolyML.addPrettyPrinter (fn depth => fn pretty => fn x => 17.74 - (case Future.peek x of 17.75 - NONE => PolyML.PrettyString "<future>" 17.76 - | SOME (Exn.Exn _) => PolyML.PrettyString "<failed>" 17.77 - | SOME (Exn.Res y) => pretty (y, depth))); 17.78 - 17.79 -PolyML.addPrettyPrinter (fn depth => fn pretty => fn x => 17.80 - (case Lazy.peek x of 17.81 - NONE => PolyML.PrettyString "<lazy>" 17.82 - | SOME (Exn.Exn _) => PolyML.PrettyString "<failed>" 17.83 - | SOME (Exn.Res y) => pretty (y, depth))); 17.84 - 17.85 - 17.86 -local 17.87 - 17.88 -open PolyML; 17.89 -val from_ML = Pretty.from_ML o 
ML_Pretty.from_polyml; 17.90 -fun prt_app name prt = Pretty.block [Pretty.str (name ^ " "), prt]; 17.91 -fun prt_apps name = Pretty.enum "," (name ^ " (") ")"; 17.92 - 17.93 -fun prt_term parens (dp: FixedInt.int) t = 17.94 - if dp <= 0 then Pretty.str "..." 17.95 - else 17.96 - (case t of 17.97 - _ $ _ => 17.98 - op :: (strip_comb t) 17.99 - |> map_index (fn (i, u) => prt_term true (dp - FixedInt.fromInt i - 1) u) 17.100 - |> Pretty.separate " $" 17.101 - |> (if parens then Pretty.enclose "(" ")" else Pretty.block) 17.102 - | Abs (a, T, b) => 17.103 - prt_apps "Abs" 17.104 - [from_ML (prettyRepresentation (a, dp - 1)), 17.105 - from_ML (prettyRepresentation (T, dp - 2)), 17.106 - prt_term false (dp - 3) b] 17.107 - | Const a => prt_app "Const" (from_ML (prettyRepresentation (a, dp - 1))) 17.108 - | Free a => prt_app "Free" (from_ML (prettyRepresentation (a, dp - 1))) 17.109 - | Var a => prt_app "Var" (from_ML (prettyRepresentation (a, dp - 1))) 17.110 - | Bound a => prt_app "Bound" (from_ML (prettyRepresentation (a, dp - 1)))); 17.111 - 17.112 -in 17.113 - 17.114 -val _ = 17.115 - PolyML.addPrettyPrinter (fn depth => fn _ => fn t => 17.116 - ML_Pretty.to_polyml (Pretty.to_ML (prt_term false depth t))); 17.117 - 17.118 -local 17.119 - 17.120 -fun prt_proof parens dp prf = 17.121 - if dp <= 0 then Pretty.str "..." 
17.122 - else 17.123 - (case prf of 17.124 - _ % _ => prt_proofs parens dp prf 17.125 - | _ %% _ => prt_proofs parens dp prf 17.126 - | Abst (a, T, b) => 17.127 - prt_apps "Abst" 17.128 - [from_ML (prettyRepresentation (a, dp - 1)), 17.129 - from_ML (prettyRepresentation (T, dp - 2)), 17.130 - prt_proof false (dp - 3) b] 17.131 - | AbsP (a, t, b) => 17.132 - prt_apps "AbsP" 17.133 - [from_ML (prettyRepresentation (a, dp - 1)), 17.134 - from_ML (prettyRepresentation (t, dp - 2)), 17.135 - prt_proof false (dp - 3) b] 17.136 - | Hyp t => prt_app "Hyp" (prt_term true (dp - 1) t) 17.137 - | MinProof => Pretty.str "MinProof" 17.138 - | PBound a => prt_app "PBound" (from_ML (prettyRepresentation (a, dp - 1))) 17.139 - | PAxm a => prt_app "PAxm" (from_ML (prettyRepresentation (a, dp - 1))) 17.140 - | OfClass a => prt_app "OfClass" (from_ML (prettyRepresentation (a, dp - 1))) 17.141 - | Oracle a => prt_app "Oracle" (from_ML (prettyRepresentation (a, dp - 1))) 17.142 - | Promise a => prt_app "Promise" (from_ML (prettyRepresentation (a, dp - 1))) 17.143 - | PThm a => prt_app "PThm" (from_ML (prettyRepresentation (a, dp - 1)))) 17.144 - 17.145 -and prt_proofs parens dp prf = 17.146 - let 17.147 - val (head, args) = strip_proof prf []; 17.148 - val prts = 17.149 - head (dp - 1) :: flat (map_index (fn (i, prt) => prt (dp - FixedInt.fromInt i - 2)) args); 17.150 - in if parens then Pretty.enclose "(" ")" prts else Pretty.block prts end 17.151 - 17.152 -and strip_proof (p % t) res = 17.153 - strip_proof p 17.154 - ((fn d => [Pretty.str " %", Pretty.brk 1, from_ML (prettyRepresentation (t, d))]) :: res) 17.155 - | strip_proof (p %% q) res = 17.156 - strip_proof p ((fn d => [Pretty.str " %%", Pretty.brk 1, prt_proof true d q]) :: res) 17.157 - | strip_proof p res = (fn d => prt_proof true d p, res); 17.158 - 17.159 -in 17.160 - 17.161 -val _ = 17.162 - PolyML.addPrettyPrinter (fn depth => fn _ => fn prf => 17.163 - ML_Pretty.to_polyml (Pretty.to_ML (prt_proof false depth prf))); 
17.164 - 17.165 -end; 17.166 - 17.167 -end;

18.1 --- a/src/Pure/ML/ml_antiquotation.ML Fri Mar 18 12:54:20 2016 +0100 18.2 +++ b/src/Pure/ML/ml_antiquotation.ML Fri Mar 18 20:29:50 2016 +0100 18.3 @@ -38,8 +38,10 @@ 18.4 (fn src => fn () => 18.5 ML_Context.value_decl "position" (ML_Syntax.print_position (#2 (Token.name_of_src src)))) #> 18.6 18.7 + inline (Binding.make ("make_string", @{here})) 18.8 + (Args.context >> (ML_Pretty.make_string_fn o ML_Context.struct_name)) #> 18.9 + 18.10 value (Binding.make ("binding", @{here})) 18.11 (Scan.lift (Parse.position Args.name) >> ML_Syntax.make_binding)); 18.12 18.13 end; 18.14 -

19.1 --- a/src/Pure/ML/ml_antiquotations.ML Fri Mar 18 12:54:20 2016 +0100 19.2 +++ b/src/Pure/ML/ml_antiquotations.ML Fri Mar 18 20:29:50 2016 +0100 19.3 @@ -21,9 +21,6 @@ 19.4 ML_Antiquotation.inline @{binding assert} 19.5 (Scan.succeed "(fn b => if b then () else raise General.Fail \"Assertion failed\")") #> 19.6 19.7 - ML_Antiquotation.inline @{binding make_string} 19.8 - (Args.context >> (ml_make_string o ML_Context.struct_name)) #> 19.9 - 19.10 ML_Antiquotation.declaration @{binding print} 19.11 (Scan.lift (Scan.optional Args.name "Output.writeln")) 19.12 (fn src => fn output => fn ctxt => 19.13 @@ -36,7 +33,8 @@ 19.14 \ (" ^ output ^ ") o (fn s => s ^ Position.here (" ^ 19.15 ML_Syntax.print_position pos ^ "));\n"; 19.16 val body = 19.17 - "(fn x => (" ^ struct_name ^ "." ^ a ^ " (" ^ ml_make_string struct_name ^ " x); x))"; 19.18 + "(fn x => (" ^ struct_name ^ "." ^ a ^ 19.19 + " (" ^ ML_Pretty.make_string_fn struct_name ^ " x); x))"; 19.20 in (K (env, body), ctxt') end)); 19.21 19.22
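Moving `make_string` into the bootstrap antiquotation setup keeps its user-facing behavior; a minimal usage sketch in regular Isabelle/ML (the tuple argument is an arbitrary example, and per the NEWS entry the output quality during Pure bootstrap is only approximative):

```sml
(* hedged example of the make_string antiquotation *)
ML \<open>
  val s = @{make_string} (1, "a", [true]);
  val _ = writeln s
\<close>
```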

20.1 --- a/src/Pure/ML/ml_compiler.ML Fri Mar 18 12:54:20 2016 +0100 20.2 +++ b/src/Pure/ML/ml_compiler.ML Fri Mar 18 20:29:50 2016 +0100 20.3 @@ -65,7 +65,7 @@ 20.4 let 20.5 val xml = 20.6 ML_Name_Space.displayTypeExpression (types, depth, space) 20.7 - |> ML_Pretty.from_polyml |> Pretty.from_ML |> Pretty.string_of 20.8 + |> Pretty.from_polyml |> Pretty.string_of 20.9 |> Output.output |> YXML.parse_body; 20.10 in cons (pos, fn () => Markup.ML_typing, fn () => YXML.string_of_body xml) end 20.11 end; 20.12 @@ -144,7 +144,6 @@ 20.13 20.14 fun eval (flags: flags) pos toks = 20.15 let 20.16 - val _ = Secure.deny_ml (); 20.17 val space = ML_Env.make_name_space {SML = #SML flags, exchange = #exchange flags}; 20.18 val opt_context = Context.thread_data (); 20.19 20.20 @@ -193,7 +192,7 @@ 20.21 val pos = Exn_Properties.position_of_location loc; 20.22 val txt = 20.23 (if hard then "ML error" else "ML warning") ^ Position.here pos ^ ":\n" ^ 20.24 - Pretty.string_of (Pretty.from_ML (ML_Pretty.from_polyml msg)); 20.25 + Pretty.string_of (Pretty.from_polyml msg); 20.26 in if hard then err txt else warn txt end; 20.27 20.28 20.29 @@ -205,8 +204,7 @@ 20.30 let 20.31 fun display disp x = 20.32 if depth > 0 then 20.33 - (write (disp x |> ML_Pretty.from_polyml |> Pretty.from_ML |> Pretty.string_of); 20.34 - write "\n") 20.35 + (write (disp x |> Pretty.from_polyml |> Pretty.string_of); write "\n") 20.36 else (); 20.37 20.38 fun apply_fix (a, b) =

21.1 --- a/src/Pure/ML/ml_compiler0.ML Fri Mar 18 12:54:20 2016 +0100 21.2 +++ b/src/Pure/ML/ml_compiler0.ML Fri Mar 18 20:29:50 2016 +0100 21.3 @@ -33,19 +33,20 @@ 21.4 21.5 fun ml_input start_line name txt = 21.6 let 21.7 - fun positions line (#"@" :: #"{" :: #"h" :: #"e" :: #"r" :: #"e" :: #"}" :: cs) res = 21.8 + fun input line (#"@" :: #"{" :: #"h" :: #"e" :: #"r" :: #"e" :: #"}" :: cs) res = 21.9 let val s = "(Position.line_file_only " ^ Int.toString line ^ " \"" ^ name ^ "\")" 21.10 - in positions line cs (s :: res) end 21.11 - | positions line (#"\\" :: #"<" :: cs) res = positions line cs ("\\\\<" :: res) 21.12 - | positions line (#"\n" :: cs) res = positions (line + 1) cs ("\n" :: res) 21.13 - | positions line (c :: cs) res = positions line cs (str c :: res) 21.14 - | positions _ [] res = rev res; 21.15 - in String.concat (positions start_line (String.explode txt) []) end; 21.16 + in input line cs (s :: res) end 21.17 + | input line (#"@" :: #"{" :: #"m" :: #"a" :: #"k" :: #"e" :: #"_" :: 21.18 + #"s" :: #"t" :: #"r" :: #"i" :: #"n" :: #"g" :: #"}" :: cs) res = 21.19 + input line cs (ML_Pretty.make_string_fn "" :: res) 21.20 + | input line (#"\\" :: #"<" :: cs) res = input line cs ("\\\\<" :: res) 21.21 + | input line (#"\n" :: cs) res = input (line + 1) cs ("\n" :: res) 21.22 + | input line (c :: cs) res = input line cs (str c :: res) 21.23 + | input _ [] res = rev res; 21.24 + in String.concat (input start_line (String.explode txt) []) end; 21.25 21.26 fun use_text ({name_space, here, print, error, ...}: context) {line, file, verbose, debug} text = 21.27 let 21.28 - val _ = Secure.deny_ml (); 21.29 - 21.30 val current_line = Unsynchronized.ref line; 21.31 val in_buffer = Unsynchronized.ref (String.explode (ml_input line file text)); 21.32 val out_buffer = Unsynchronized.ref ([]: string list);

--- a/src/Pure/ML/ml_env.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/ML/ml_env.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -77,7 +77,7 @@
 fun forget_structure name =
   Env.map (fn {bootstrap, tables, sml_tables, breakpoints} =>
     let
-      val _ = if bootstrap then ML_Name_Space.forget_global_structure name else ();
+      val _ = if bootstrap then ML_Name_Space.forget_structure name else ();
       val tables' =
         (#1 tables, #2 tables, #3 tables, Symtab.delete_safe name (#4 tables), #5 tables, #6 tables);
     in make_data (bootstrap, tables', sml_tables, breakpoints) end);

--- a/src/Pure/ML/ml_name_space.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/ML/ml_name_space.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -6,20 +6,23 @@
 
 structure ML_Name_Space =
 struct
+  val critical_structures = ["CInterface", "Foreign", "RunCall", "RuntimeCalls", "Signal"];
+
   open PolyML.NameSpace;
 
   type T = PolyML.NameSpace.nameSpace;
   val global = PolyML.globalNameSpace;
-  val forget_global_structure = PolyML.Compiler.forgetStructure;
 
   type valueVal = Values.value;
   fun displayVal (x, depth, space) = Values.printWithType (x, depth, SOME space);
   fun displayTypeExpression (x, depth, space) = Values.printType (x, depth, SOME space);
   val initial_val = List.filter (fn (a, _) => a <> "use" andalso a <> "exit") (#allVal global ());
+  val forget_val = PolyML.Compiler.forgetValue;
 
   type typeVal = TypeConstrs.typeConstr;
   fun displayType (x, depth, space) = TypeConstrs.print (x, depth, SOME space);
   val initial_type = #allType global ();
+  val forget_type = PolyML.Compiler.forgetType;
 
   type fixityVal = Infixes.fixity;
   fun displayFix (_: string, x) = Infixes.print x;
@@ -27,7 +30,10 @@
 
   type structureVal = Structures.structureVal;
   fun displayStruct (x, depth, space) = Structures.print (x, depth, SOME space);
-  val initial_structure = #allStruct global ();
+  val initial_structure =
+    List.filter (fn (a, _) => not (List.exists (fn b => a = b) critical_structures))
+      (#allStruct global ());
+  val forget_structure = PolyML.Compiler.forgetStructure;
 
   type signatureVal = Signatures.signatureVal;
   fun displaySig (x, depth, space) = Signatures.print (x, depth, SOME space);
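The new `initial_structure` value above filters the global name space against the `critical_structures` blocklist, so low-level Poly/ML entry points never reach the bootstrap environment. The filtering itself is just an association-list scan; a Python sketch (the helper name `initial_structures` is illustrative):

```python
# Blocklist copied from ml_name_space.ML.
CRITICAL_STRUCTURES = ["CInterface", "Foreign", "RunCall", "RuntimeCalls", "Signal"]

def initial_structures(all_structs):
    """Drop (name, value) entries whose name is on the critical blocklist.

    Mirrors the List.filter over #allStruct global () in ml_name_space.ML."""
    return [(name, value) for (name, value) in all_structs
            if name not in CRITICAL_STRUCTURES]
```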

--- a/src/Pure/ML/ml_options.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/ML/ml_options.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -66,12 +66,12 @@
 (* print depth *)
 
 val print_depth_raw =
-  Config.declare ("ML_print_depth", @{here}) (fn _ => Config.Int (get_default_print_depth ()));
+  Config.declare ("ML_print_depth", @{here}) (fn _ => Config.Int (ML_Pretty.get_print_depth ()));
 val print_depth = Config.int print_depth_raw;
 
 fun get_print_depth () =
   (case Context.thread_data () of
     NONE => get_default_print_depth ()
+    NONE => ML_Pretty.get_print_depth ()
   | SOME context => Config.get_generic context print_depth);
 
 fun get_print_depth_default default =

--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/src/Pure/ML/ml_pp.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -0,0 +1,113 @@
+(*  Title:      Pure/ML/ml_pp.ML
+    Author:     Makarius
+
+ML toplevel pretty-printing for logical entities.
+*)
+
+structure ML_PP: sig end =
+struct
+
+val _ =
+  PolyML.addPrettyPrinter
+    (fn _ => fn _ => Pretty.to_polyml o Proof_Display.pp_context);
+
+val _ =
+  PolyML.addPrettyPrinter
+    (fn _ => fn _ => Pretty.to_polyml o Proof_Display.pp_thm Thy_Info.pure_theory);
+
+val _ =
+  PolyML.addPrettyPrinter
+    (fn _ => fn _ => Pretty.to_polyml o Proof_Display.pp_cterm Thy_Info.pure_theory);
+
+val _ =
+  PolyML.addPrettyPrinter
+    (fn _ => fn _ => Pretty.to_polyml o Proof_Display.pp_ctyp Thy_Info.pure_theory);
+
+val _ =
+  PolyML.addPrettyPrinter
+    (fn _ => fn _ => Pretty.to_polyml o Proof_Display.pp_typ Thy_Info.pure_theory);
+
+
+local
+
+fun prt_app name prt = Pretty.block [Pretty.str (name ^ " "), prt];
+fun prt_apps name = Pretty.enum "," (name ^ " (") ")";
+
+fun prt_term parens (dp: FixedInt.int) t =
+  if dp <= 0 then Pretty.str "..."
+  else
+    (case t of
+      _ $ _ =>
+        op :: (strip_comb t)
+        |> map_index (fn (i, u) => prt_term true (dp - FixedInt.fromInt i - 1) u)
+        |> Pretty.separate " $"
+        |> (if parens then Pretty.enclose "(" ")" else Pretty.block)
+    | Abs (a, T, b) =>
+        prt_apps "Abs"
+         [Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1)),
+          Pretty.from_polyml (PolyML.prettyRepresentation (T, dp - 2)),
+          prt_term false (dp - 3) b]
+    | Const a => prt_app "Const" (Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1)))
+    | Free a => prt_app "Free" (Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1)))
+    | Var a => prt_app "Var" (Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1)))
+    | Bound a => prt_app "Bound" (Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1))));
+
+in
+
+val _ = PolyML.addPrettyPrinter (fn depth => fn _ => Pretty.to_polyml o prt_term false depth);
+
+
+local
+
+fun prt_proof parens dp prf =
+  if dp <= 0 then Pretty.str "..."
+  else
+    (case prf of
+      _ % _ => prt_proofs parens dp prf
+    | _ %% _ => prt_proofs parens dp prf
+    | Abst (a, T, b) =>
+        prt_apps "Abst"
+         [Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1)),
+          Pretty.from_polyml (PolyML.prettyRepresentation (T, dp - 2)),
+          prt_proof false (dp - 3) b]
+    | AbsP (a, t, b) =>
+        prt_apps "AbsP"
+         [Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1)),
+          Pretty.from_polyml (PolyML.prettyRepresentation (t, dp - 2)),
+          prt_proof false (dp - 3) b]
+    | Hyp t => prt_app "Hyp" (prt_term true (dp - 1) t)
+    | MinProof => Pretty.str "MinProof"
+    | PBound a => prt_app "PBound" (Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1)))
+    | PAxm a => prt_app "PAxm" (Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1)))
+    | OfClass a => prt_app "OfClass" (Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1)))
+    | Oracle a => prt_app "Oracle" (Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1)))
+    | Promise a => prt_app "Promise" (Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1)))
+    | PThm a => prt_app "PThm" (Pretty.from_polyml (PolyML.prettyRepresentation (a, dp - 1))))
+
+and prt_proofs parens dp prf =
+  let
+    val (head, args) = strip_proof prf [];
+    val prts =
+      head (dp - 1) :: flat (map_index (fn (i, prt) => prt (dp - FixedInt.fromInt i - 2)) args);
+  in if parens then Pretty.enclose "(" ")" prts else Pretty.block prts end
+
+and strip_proof (p % t) res =
+      strip_proof p
+        ((fn d =>
+          [Pretty.str " %", Pretty.brk 1,
+            Pretty.from_polyml (PolyML.prettyRepresentation (t, d))]) :: res)
+  | strip_proof (p %% q) res =
+      strip_proof p ((fn d => [Pretty.str " %%", Pretty.brk 1, prt_proof true d q]) :: res)
+  | strip_proof p res = (fn d => prt_proof true d p, res);
+
+in
+
+val _ =
+  PolyML.addPrettyPrinter (fn depth => fn _ => fn prf =>
+    ML_Pretty.to_polyml (Pretty.to_ML ~1 (prt_proof false depth prf)));
+
+end;
+
+end;
+
+end;

--- a/src/Pure/ML/ml_pretty.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/ML/ml_pretty.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -17,11 +17,18 @@
   val enum: string -> string -> string -> ('a * int -> pretty) -> 'a list * int -> pretty
   val to_polyml: pretty -> PolyML.pretty
   val from_polyml: PolyML.pretty -> pretty
+  val get_print_depth: unit -> int
+  val print_depth: int -> unit
+  val format_polyml: int -> PolyML.pretty -> string
+  val format: int -> pretty -> string
+  val make_string_fn: string -> string
 end;
 
 structure ML_Pretty: ML_PRETTY =
 struct
 
+(* datatype pretty *)
+
 datatype pretty =
   Block of (string * string) * bool * FixedInt.int * pretty list |
   String of string * FixedInt.int |
@@ -81,4 +88,43 @@
       String (s, FixedInt.fromInt (case Int.fromString len of SOME i => i | NONE => size s))
   in convert "" end;
 
+
+(* default print depth *)
+
+local
+  val depth = Unsynchronized.ref 0;
+in
+  fun get_print_depth () = ! depth;
+  fun print_depth n = (depth := n; PolyML.print_depth n);
+  val _ = print_depth 10;
 end;
+
+
+(* format *)
+
+fun format_polyml margin prt =
+  let
+    val result = Unsynchronized.ref [];
+    val () = PolyML.prettyPrint (fn s => result := s :: ! result, margin) prt
+  in String.concat (List.rev (! result)) end;
+
+fun format margin = format_polyml margin o to_polyml;
+
+
+(* make string *)
+
+local
+  fun pretty_string_of s = "(fn x => Pretty.string_of (Pretty.from_polyml (" ^ s ^ ")))";
+  fun pretty_value depth = "PolyML.prettyRepresentation (x, FixedInt.fromInt (" ^ depth ^ "))";
+in
+
+fun make_string_fn local_env =
+  if local_env <> "" then
+    pretty_string_of (pretty_value (local_env ^ ".ML_print_depth ()"))
+  else if List.exists (fn (a, _) => a = "Pretty") (#allStruct ML_Name_Space.global ()) then
+    pretty_string_of (pretty_value "ML_Pretty.get_print_depth ()")
+  else "(fn x => ML_Pretty.format_polyml 78 (" ^ pretty_value "ML_Pretty.get_print_depth ()" ^ "))";
+
+end;
+
+end;
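The new `format_polyml` above follows a common callback-accumulator pattern: the Poly/ML printer is handed a string-consuming function, the emitted chunks are collected in a mutable reference, and the result is joined at the end. A Python sketch of the pattern (the name `format_with` is illustrative; `printer` stands in for `PolyML.prettyPrint`):

```python
def format_with(printer, margin):
    """Collect output chunks pushed to a callback, then join them.

    Mirrors the Unsynchronized.ref accumulator in ML_Pretty.format_polyml."""
    chunks = []
    # The printer calls the supplied emit function once per output fragment.
    printer(chunks.append, margin)
    # The ML version conses onto the front and reverses at the end;
    # a Python list appends in order, so a plain join suffices.
    return "".join(chunks)
```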

--- a/src/Pure/ML/ml_syntax.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/ML/ml_syntax.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -123,4 +123,8 @@
       else take (Int.max (max_len, 0)) body @ ["..."];
   in Pretty.str (quote (implode (map print_char body'))) end;
 
+val _ =
+  PolyML.addPrettyPrinter (fn depth => fn _ => fn str =>
+    Pretty.to_polyml (pretty_string (FixedInt.toInt (depth * 100)) str));
+
 end;

--- a/src/Pure/PIDE/xml.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/PIDE/xml.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -170,6 +170,9 @@
 fun pretty depth tree =
   Pretty.str (Buffer.content (buffer_of (Int.max (0, depth)) tree));
 
+val _ =
+  PolyML.addPrettyPrinter (fn depth => fn _ => Pretty.to_polyml o pretty (FixedInt.toInt depth));
+
 
 
 (** XML parsing **)

--- a/src/Pure/ROOT	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/ROOT	Fri Mar 18 20:29:50 2016 +0100
@@ -47,11 +47,8 @@
     "General/random.ML"
     "General/same.ML"
     "General/scan.ML"
-    "General/secure.ML"
     "General/seq.ML"
     "General/sha1.ML"
-    "General/sha1_polyml.ML"
-    "General/sha1_samples.ML"
     "General/socket_io.ML"
     "General/source.ML"
     "General/stack.ML"
@@ -101,7 +98,6 @@
     "ML/exn_debugger.ML"
     "ML/exn_properties.ML"
    "ML/fixed_int_dummy.ML"
-    "ML/install_pp_polyml.ML"
     "ML/ml_antiquotation.ML"
     "ML/ml_compiler.ML"
     "ML/ml_compiler0.ML"
@@ -113,6 +109,7 @@
     "ML/ml_lex.ML"
     "ML/ml_name_space.ML"
     "ML/ml_options.ML"
+    "ML/ml_pp.ML"
     "ML/ml_pretty.ML"
     "ML/ml_profiling.ML"
     "ML/ml_statistics.ML"

--- a/src/Pure/ROOT.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/ROOT.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -28,28 +28,21 @@
 use "Concurrent/multithreading.ML";
 
 use "Concurrent/unsynchronized.ML";
-val _ = PolyML.Compiler.forgetValue "ref";
-val _ = PolyML.Compiler.forgetType "ref";
+val _ = ML_Name_Space.forget_val "ref";
+val _ = ML_Name_Space.forget_type "ref";
 
 
 (* pervasive environment *)
 
-val _ = PolyML.Compiler.forgetValue "isSome";
-val _ = PolyML.Compiler.forgetValue "getOpt";
-val _ = PolyML.Compiler.forgetValue "valOf";
-val _ = PolyML.Compiler.forgetValue "foldl";
-val _ = PolyML.Compiler.forgetValue "foldr";
-val _ = PolyML.Compiler.forgetValue "print";
-val _ = PolyML.Compiler.forgetValue "explode";
-val _ = PolyML.Compiler.forgetValue "concat";
+val _ =
+  List.app ML_Name_Space.forget_val
+    ["isSome", "getOpt", "valOf", "foldl", "foldr", "print", "explode", "concat"];
 
 val ord = SML90.ord;
 val chr = SML90.chr;
 val raw_explode = SML90.explode;
 val implode = SML90.implode;
 
-fun quit () = exit 0;
-
 
 (* ML runtime system *)
 
@@ -63,20 +56,11 @@
 
 use "ML/ml_pretty.ML";
 
-local
-  val depth = Unsynchronized.ref 10;
-in
-  fun get_default_print_depth () = ! depth;
-  fun default_print_depth n = (depth := n; PolyML.print_depth n);
-  val _ = default_print_depth 10;
-end;
-
 val error_depth = PolyML.error_depth;
 
 
 (* ML compiler *)
 
-use "General/secure.ML";
 use "ML/ml_compiler0.ML";
 
 PolyML.Compiler.reportUnreferencedIds := true;
@@ -84,10 +68,6 @@
 PolyML.Compiler.printInAlphabeticalOrder := false;
 PolyML.Compiler.maxInlineSize := 80;
 
-fun ml_make_string struct_name =
-  "(fn x => Pretty.string_of (Pretty.from_ML (ML_Pretty.from_polyml (PolyML.prettyRepresentation (x, FixedInt.fromInt (" ^
-    struct_name ^ ".ML_print_depth ()))))))";
-
 
 (* ML debugger *)
 
@@ -162,10 +142,9 @@
 use "General/socket_io.ML";
 use "General/seq.ML";
 use "General/timing.ML";
+use "General/sha1.ML";
 
-use "General/sha1.ML";
-use "General/sha1_polyml.ML";
-use "General/sha1_samples.ML";
+val _ = List.app ML_Name_Space.forget_structure ML_Name_Space.critical_structures;
 
 use "PIDE/yxml.ML";
 use "PIDE/document_id.ML";
@@ -410,8 +389,7 @@
 
 structure Output: OUTPUT = Output;  (*seal system channels!*)
 
-
-use "ML/install_pp_polyml.ML";
+use "ML/ml_pp.ML";
 
 
 (* the Pure theory *)
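The removed `local ... end` block above is the pattern that moved into `ML_Pretty`: a private mutable default print depth, with a setter that also propagates the new value to the compiler. The same shape in Python (class name `PrintDepth` and the `propagate` callback are illustrative stand-ins, not Isabelle APIs):

```python
class PrintDepth:
    """Private mutable default with a propagating setter,
    like ML_Pretty.print_depth also updating PolyML.print_depth."""

    def __init__(self, propagate):
        self._depth = 0            # ML_Pretty starts the ref at 0
        self._propagate = propagate  # stands in for PolyML.print_depth

    def get(self):
        return self._depth

    def set(self, n):
        self._depth = n
        self._propagate(n)         # keep the compiler's setting in sync

# Bootstrap default, matching `val _ = print_depth 10;`
seen = []
pd = PrintDepth(seen.append)
pd.set(10)
```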

--- a/src/Pure/Syntax/ast.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/Syntax/ast.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -66,6 +66,8 @@
 fun pretty_rule (lhs, rhs) =
   Pretty.block [pretty_ast lhs, Pretty.str " ->", Pretty.brk 2, pretty_ast rhs];
 
+val _ = PolyML.addPrettyPrinter (fn _ => fn _ => Pretty.to_polyml o pretty_ast);
+
 
 (* strip_positions *)
 

--- a/src/Pure/Syntax/lexicon.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/Syntax/lexicon.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -66,7 +66,6 @@
   val is_marked: string -> bool
   val dummy_type: term
   val fun_type: term
-  val pp_lexicon: Scan.lexicon -> Pretty.T
 end;
 
 structure Lexicon: LEXICON =
@@ -455,7 +454,8 @@
 
 (* toplevel pretty printing *)
 
-val pp_lexicon =
-  Pretty.str_list "{" "}" o map quote o Scan.dest_lexicon;
+val _ =
+  PolyML.addPrettyPrinter (fn _ => fn _ =>
+    Pretty.to_polyml o Pretty.str_list "{" "}" o map quote o Scan.dest_lexicon);
 
 end;

--- a/src/Pure/Syntax/parser.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/Syntax/parser.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -71,7 +71,7 @@
    N.B. that the chains parameter has the form [(from, [to])];
    prod_count is of type "int option" and is only updated if it is <> NONE*)
 fun add_prods _ chains lambdas prod_count [] = (chains, lambdas, prod_count)
-  | add_prods prods chains lambdas prod_count ((lhs, new_prod as (rhs, name, pri)) :: ps) =
+  | add_prods prods chains lambdas prod_count ((lhs, new_prod as (rhs, _, pri)) :: ps) =
       let
         val chain_from =
           (case (pri, rhs) of
@@ -95,7 +95,7 @@
           if is_none new_chain then ([], lambdas)
           else
             let (*lookahead of chain's source*)
-              val ((from_nts, from_tks), _) = Array.sub (prods, the new_chain);
+              val ((_, from_tks), _) = Array.sub (prods, the new_chain);
 
               (*copy from's lookahead to chain's destinations*)
               fun copy_lookahead [] added = added
@@ -195,7 +195,7 @@
               (added_lambdas, added_starts)
           | process_nts (nt :: nts) added_lambdas added_starts =
               let
-                val (lookahead as (old_nts, old_tks), nt_prods) = Array.sub (prods, nt);
+                val ((old_nts, old_tks), nt_prods) = Array.sub (prods, nt);
 
                 (*existing productions whose lookahead may depend on l*)
                 val tk_prods =
@@ -235,7 +235,7 @@
       in propagate_lambda (subtract (op =) lambdas lambdas') added_starts lambdas' end;
 
     (*insert production into grammar*)
-    val (added_starts', prod_count') =
+    val (added_starts', _) =
       if is_some chain_from
       then (added_starts', prod_count)  (*don't store chain production*)
       else
@@ -255,7 +255,7 @@
 
         (*add lhs NT to list of dependent NTs in lookahead*)
         fun add_nts [] = ()
-          | add_nts (nt :: nts) =
+          | add_nts (nt :: _) =
              let val ((old_nts, old_tks), ps) = Array.sub (prods, nt) in
                if member (op =) old_nts lhs then ()
                else Array.update (prods, nt, ((lhs :: old_nts, old_tks), ps))
@@ -567,7 +567,7 @@
   filter (fn (_, _, prec: int) => prec >= min_prec andalso prec < max_prec);
 
 (*Make states using a list of rhss*)
-fun mk_states i min_prec lhs_ID rhss =
+fun mk_states i lhs_ID rhss =
   let fun mk_state (rhs, id, prod_prec) = (lhs_ID, prod_prec, [], rhs, id, i);
   in map mk_state rhss end;
 
@@ -604,11 +604,11 @@
       A = B andalso prec > min_prec andalso prec <= max_prec
   | _ => false) Si;
 
-fun get_states Estate i ii A max_prec =
+fun get_states Estate j A max_prec =
   filter
     (fn (_, _, _, Nonterminal (B, prec) :: _, _, _) => A = B andalso prec <= max_prec
       | _ => false)
-    (Array.sub (Estate, ii));
+    (Array.sub (Estate, j));
 
 
 fun movedot_term c (A, j, ts, Terminal a :: sa, id, i) =
@@ -654,7 +654,7 @@
   let
     fun all_prods_for nt = prods_for prods chains true c [nt];
 
-    fun processS used [] (Si, Sii) = (Si, Sii)
+    fun processS _ [] (Si, Sii) = (Si, Sii)
       | processS used (S :: States) (Si, Sii) =
          (case S of
            (_, _, _, Nonterminal (nt, min_prec) :: _, _, _) =>
@@ -669,12 +669,12 @@
                  let
                    val tk_prods = all_prods_for nt;
                    val States' =
-                      mk_states i min_prec nt (get_RHS' min_prec used_prec tk_prods);
+                      mk_states i nt (get_RHS' min_prec used_prec tk_prods);
                  in (update_prec (nt, min_prec) used, movedot_lambda l S @ States') end
              | NONE =>  (*nonterminal is parsed for the first time*)
                  let
                    val tk_prods = all_prods_for nt;
-                    val States' = mk_states i min_prec nt (get_RHS min_prec tk_prods);
+                    val States' = mk_states i nt (get_RHS min_prec tk_prods);
                  in (Inttab.update (nt, (min_prec, [])) used, States') end);
            in
              processS used' (new_states @ States) (S :: Si, Sii)
@@ -692,7 +692,7 @@
              val States' = map (movedot_nonterm tt) Slist;
            in processS used' (States' @ States) (S :: Si, Sii) end
          else
-          let val Slist = get_states Estate i j A prec
+          let val Slist = get_states Estate j A prec
           in processS used (map (movedot_nonterm tt) Slist @ States) (S :: Si, Sii) end
       end)
   in processS Inttab.empty states ([], []) end;

--- a/src/Pure/System/bash_windows.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/System/bash_windows.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -1,4 +1,4 @@
-(*  Title:      Pure/Concurrent/bash_windows.ML
+(*  Title:      Pure/System/bash_windows.ML
     Author:     Makarius
 
 GNU bash processes, with propagation of interrupts -- Windows version.

--- a/src/Pure/System/isabelle_process.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/System/isabelle_process.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -190,7 +190,7 @@
 
 val init_protocol = uninterruptible (fn _ => fn socket =>
   let
-    val _ = SHA1_Samples.test ()
+    val _ = SHA1.test_samples ()
       handle exn as Fail msg => (Output.physical_stderr (msg ^ "\n"); Exn.reraise exn);
     val _ = Output.physical_stderr Symbol.STX;
 

--- a/src/Pure/System/isabelle_process.scala	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/System/isabelle_process.scala	Fri Mar 18 20:29:50 2016 +0100
@@ -14,15 +14,14 @@
     logic: String = "",
     args: List[String] = Nil,
     modes: List[String] = Nil,
-    secure: Boolean = false,
     receiver: Prover.Receiver = Console.println(_),
     store: Sessions.Store = Sessions.store()): Isabelle_Process =
   {
     val channel = System_Channel()
     val process =
       try {
-        ML_Process(options, logic = logic, args = args, modes = modes, secure = secure,
-          channel = Some(channel), store = store)
+        ML_Process(options, logic = logic, args = args, modes = modes, store = store,
+          channel = Some(channel))
       }
       catch { case exn @ ERROR(_) => channel.accepted(); throw exn }
     process.stdin.close

--- a/src/Pure/Tools/build.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/Tools/build.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -122,7 +122,7 @@
 
 fun build args_file =
   let
-    val _ = SHA1_Samples.test ();
+    val _ = SHA1.test_samples ();
 
     val (symbol_codes, (command_timings, (do_output, (verbose, (browser_info,
       (document_files, (graph_file, (parent_name, (chapter, (name, theories)))))))))) =

--- a/src/Pure/Tools/debugger.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/Tools/debugger.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -189,9 +189,8 @@
     val space = ML_Debugger.debug_name_space (the_debug_state thread_name index);
 
     fun print x =
-      Pretty.from_ML
-        (ML_Pretty.from_polyml
-          (ML_Name_Space.displayVal (x, FixedInt.fromInt (ML_Options.get_print_depth ()), space)));
+      Pretty.from_polyml
+        (ML_Name_Space.displayVal (x, FixedInt.fromInt (ML_Options.get_print_depth ()), space));
     fun print_all () =
       #allVal (ML_Debugger.debug_local_name_space (the_debug_state thread_name index)) ()
       |> sort_by #1 |> map (Pretty.item o single o print o #2)

--- a/src/Pure/Tools/ml_process.scala	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/Tools/ml_process.scala	Fri Mar 18 20:29:50 2016 +0100
@@ -18,7 +18,6 @@
     dirs: List[Path] = Nil,
     modes: List[String] = Nil,
     raw_ml_system: Boolean = false,
-    secure: Boolean = false,
     cwd: JFile = null,
     env: Map[String, String] = Isabelle_System.settings(),
     redirect: Boolean = false,
@@ -67,17 +66,15 @@
     val env_options = Map("ISABELLE_PROCESS_OPTIONS" -> File.standard_path(isabelle_process_options))
     val eval_options = if (heaps.isEmpty) Nil else List("Options.load_default ()")
 
-    val eval_secure = if (secure) List("Secure.set_secure ()") else Nil
-
     val eval_process =
       if (heaps.isEmpty)
         List("PolyML.print_depth 10")
       else
         channel match {
           case None =>
-            List("(default_print_depth 10; Isabelle_Process.init_options ())")
+            List("(ML_Pretty.print_depth 10; Isabelle_Process.init_options ())")
           case Some(ch) =>
-            List("(default_print_depth 10; Isabelle_Process.init_protocol " +
+            List("(ML_Pretty.print_depth 10; Isabelle_Process.init_protocol " +
               ML_Syntax.print_string0(ch.server_name) + ")")
         }
 
@@ -88,7 +85,7 @@
     // bash
     val bash_args =
       Word.explode(Isabelle_System.getenv("ML_OPTIONS")) :::
-        (eval_init ::: eval_modes ::: eval_options ::: eval_secure ::: eval_process).
+        (eval_init ::: eval_modes ::: eval_options ::: eval_process).
           map(eval => List("--eval", eval)).flatten ::: args
 
     Bash.process("""exec "$ML_HOME/poly" -q """ + File.bash_args(bash_args),

--- a/src/Pure/context.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/context.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -169,6 +169,8 @@
 
 val pretty_thy = Pretty.str_list "{" "}" o display_names;
 
+val _ = PolyML.addPrettyPrinter (fn _ => fn _ => Pretty.to_polyml o pretty_thy);
+
 fun pretty_abbrev_thy thy =
   let
     val names = display_names thy;

--- a/src/Pure/morphism.ML	Fri Mar 18 12:54:20 2016 +0100
+++ b/src/Pure/morphism.ML	Fri Mar 18 20:29:50 2016 +0100
@@ -71,6 +71,8 @@
 
 fun pretty (Morphism {names, ...}) = Pretty.enum ";" "{" "}" (map Pretty.str (rev names));
 
+val _ = PolyML.addPrettyPrinter (fn _ => fn _ => Pretty.to_polyml o pretty);
+
 fun binding (Morphism {binding, ...}) = apply binding;
 fun typ (Morphism {typ, ...}) = apply typ;
 fun term (Morphism {term, ...}) = apply term;